Search results for: Marc Dietrich
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 117

27 Exploring Antifragility Principles in Humanitarian Supply Chain: The Key Role of Information Systems

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

The COVID-19 pandemic has been a major global disruption that has affected all supply chains on a worldwide scale. Consequently, the question posed by this communication is to understand how - in the face of such disruptions - supply chains, including their actors, management tools, and processes, react, survive, adapt, and even improve. To do so, the concepts of resilience and antifragility applied to a supply chain have been leveraged. This article proposes to view resilience as a step to be surpassed on the way to antifragility. The research objective is to propose an analytical framework to measure and compare resilience and antifragility, with antifragility seen as a property of a system that improves when subjected to disruptions rather than merely resisting them, as is the case with resilience. A single case - MSF Logistics (France) - was studied using a qualitative methodology. Semi-structured interviews were conducted in person and remotely in multiple phases: during and immediately after the COVID crisis (8 interviews from March 2020 to April 2021), followed by a new round from September to November 2023. A Delphi method was employed. The interviews were analyzed using coding and a thematic framework. One theoretical contribution is consolidating the field of supply chain resilience research by precisely characterizing the dimensions of resilience for a humanitarian supply chain (reorganization, collaboration mediated by IS, humanitarian culture). In this regard, a managerial contribution of this study is a guide for managers to identify the dimensions and sub-dimensions of supply chain resilience. This enables managers to focus their decisions and actions on dimensions that will enhance resilience. Most importantly, another contribution is comparing the concepts of resilience and antifragility and proposing an analytical framework for antifragility - namely, the mechanisms on which MSF Logistics relied to capitalize on uncertainties, contingencies, and shocks rather than simply enduring them. For MSF Logistics, antifragility manifested through the ability to identify opportunities hidden behind the uncertainties and shocks of COVID-19, reducing vulnerability, and fostering a culture that encourages innovation and the testing of new ideas. Logistics, particularly in the humanitarian domain, must be able to adapt to environmental disruptions. In this sense, this study identifies and characterizes the dimensions of resilience implemented by humanitarian logistics. Moreover, this research goes beyond the concept of resilience to propose an analytical framework for the concept of antifragility. The organization studied emerged stronger from the COVID-19 crisis due to the mechanisms we identified, allowing us to characterize antifragility. Finally, the results show that the information system plays a key role in antifragility.

Keywords: antifragility, humanitarian supply chain, information systems, qualitative research, resilience

Procedia PDF Downloads 64
26 Approach to Honey Volatiles Profiling by Gas Chromatography and Mass Spectrometry

Authors: Igor Jerkovic

Abstract:

Biodiversity of flora provides many different nectar sources for the bees. Unifloral honeys possess distinctive flavours, mainly derived from their nectar sources (characteristic volatile organic components (VOCs)). Specific or nonspecific VOCs (chemical markers) can be used for unifloral honey characterisation in addition to melissopalynological analysis. The main honey volatiles belong, in general, to three principal categories: terpenes, norisoprenoids, and benzene derivatives. Some of these substances have been described as characteristic of the floral source, while other compounds, such as several alcohols, branched aldehydes, and furan derivatives, may be related to the microbial purity of the honey, its processing, and storage conditions. Selection of the extraction method for honey volatiles profiling should consider that heating the honey produces artefacts; therefore, conventional methods of VOC isolation (such as hydrodistillation) cannot be applied to honey. A two-way approach for the isolation of the honey VOCs was applied using headspace solid-phase microextraction (HS-SPME) and ultrasonic solvent extraction (USE). The extracts were analysed by gas chromatography and mass spectrometry (GC-MS). HS-SPME (with fibers of different polarity such as polydimethylsiloxane/divinylbenzene (PDMS/DVB) or divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS)) enabled isolation of highly volatile headspace VOCs of the honey samples. Among them, some characteristic or specific compounds can be found, such as 3,4-dihydro-3-oxoedulan (in Centaurea cyanus L. honey) or 1H-indole, methyl anthranilate, and cis-jasmone (in Citrus unshiu Marc. honey). USE with different solvents (mainly dichloromethane or the mixture pentane : diethyl ether 1 : 2 v/v) enabled isolation of less volatile and semi-volatile VOCs of the honey samples. Characteristic compounds from C. unshiu honey extracts were caffeine, 1H-indole, 1,3-dihydro-2H-indol-2-one, methyl anthranilate, and phenylacetonitrile. Sometimes, the selection of a solvent sequence was useful for more complete profiling, such as sequence I: pentane → diethyl ether, or sequence II: pentane → pentane/diethyl ether (1:2, v/v) → dichloromethane. The extracts with diethyl ether contained hydroquinone and 4-hydroxybenzoic acid as the major compounds, while (E)-4-(r-1’,t-2’,c-4’-trihydroxy-2’,6’,6’-trimethylcyclohexyl)but-3-en-2-one predominated in dichloromethane extracts of Allium ursinum L. honey. With this two-way approach, it was possible to obtain a more detailed insight into the honey volatile and semi-volatile compounds and to minimize the risk of compound discrimination due to partial extraction, which is of significant importance for complete honey profiling and identification of the chemical biomarkers that can complement the pollen analysis.

Keywords: honey chemical biomarkers, honey volatile compounds profiling, headspace solid-phase microextraction (HS-SPME), ultrasonic solvent extraction (USE)

Procedia PDF Downloads 203
25 On-Ice Force-Velocity Modeling: Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials. Following the warm-up, participants completed three 40 m overground sprint trials. For each trial (on-ice and overground), radar (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled using a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (p < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques, in which maximal velocity is not required for a complete profile, are needed.
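
To make the modeling step concrete, here is a minimal sketch, under stated assumptions, of fitting the mono-exponential velocity function to a sprint trace and checking the modeled acceleration at the end of the sprint, the quantity the study compares between conditions. The data are synthetic and the starting values are placeholders; this is not the authors' actual script.

```python
# Mono-exponential sprint velocity model fitted with SciPy; synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, v_max, tau):
    """Sprint velocity model: v(t) = v_max * (1 - exp(-t / tau))."""
    return v_max * (1.0 - np.exp(-t / tau))

# Hypothetical radar samples: time (s) and instantaneous velocity (m/s).
t = np.linspace(0.0, 6.0, 60)
rng = np.random.default_rng(0)
v = mono_exponential(t, 9.0, 1.2) + rng.normal(0.0, 0.1, t.size)

# Fit v_max (maximal sprint speed) and tau (acceleration time constant).
(v_max_hat, tau_hat), _ = curve_fit(mono_exponential, t, v, p0=(8.0, 1.0))

# Modeled acceleration a(t) = (v_max / tau) * exp(-t / tau); its value at the
# final sample indicates whether a velocity plateau was reached.
a_end = (v_max_hat / tau_hat) * np.exp(-t[-1] / tau_hat)
print(f"v_max = {v_max_hat:.2f} m/s, tau = {tau_hat:.2f} s, "
      f"final modeled acceleration = {a_end:.3f} m/s^2")
```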

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 100
24 Public Participation for an Effective Flood Risk Management: Building Social Capacities in Ribera Alta del Ebro, Spain

Authors: Alba Ballester Ciuró, Marc Pares Franzi

Abstract:

While coming decades are likely to see higher flood risk in Europe and greater socio-economic damages, traditional flood risk management has become inefficient. In response, new approaches such as capacity building and public participation have recently been incorporated into natural hazards mitigation policy (e.g., the Sendai Framework for Action, Intergovernmental Panel on Climate Change reports, and the EU Floods Directive). By integrating capacity building and public participation, we present research concerning the promotion of participatory social capacity building actions for flood risk mitigation at the local level. Social capacities have been defined as the resources and abilities available at individual and collective level that can be used to anticipate, respond to, cope with, recover from, and adapt to external stressors. Social capacity building is understood as a process of identifying communities' social capacities and of applying collaborative strategies to improve them. This paper presents a proposal for systematizing the participatory social capacity building process for flood risk mitigation, and its implementation in an area at high risk of flooding in the Ebro river basin: Ribera Alta del Ebro. To develop this process, we designed and tested a tool that allows measuring and building five types of social capacities: knowledge, motivation, networks, participation, and finance. Implementing the tool allowed us to assess social capacities in the area. Based on the results of the assessment, we developed a co-decision process with stakeholders and flood risk management authorities on which participatory activities could be employed to improve social capacities for flood risk mitigation. Drawing on the results of this process, and focusing on the weaker social capacities, we developed a set of participatory actions in the area oriented to the general public and stakeholders: informative sessions on the flood risk management plan and flood insurance, interpretative river descents on flood risk management (with journalists, teachers, and the general public), an interpretative visit to the floodplain, a workshop on agricultural insurance, a deliberative workshop on project funding, and deliberative workshops in schools on flood risk management (playing with a flood risk model). The combination of obtaining data through a mixed-methods approach of qualitative inquiry and quantitative surveys, as well as action research through co-decision processes and pilot participatory activities, shows the significant impact of public participation on social capacity building for flood risk mitigation and contributes to the understanding of which main factors intervene in this process.

Keywords: flood risk management, public participation, risk reduction, social capacities, vulnerability assessment

Procedia PDF Downloads 211
23 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can increase data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance through better insights delivered by better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
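
As an illustration of the ownership idea above, here is a minimal sketch, assuming a simple in-house representation rather than any specific data mesh product: each domain team owns a data product together with its metadata and quality checks. All names and checks are hypothetical.

```python
# Minimal "data product" sketch: domain-owned metadata and quality checks.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str        # the domain team accountable for this product's quality
    metadata: dict    # kept and maintained next to the data, not centrally
    checks: list = field(default_factory=list)  # domain-owned quality rules

    def validate(self, rows: list[dict]) -> bool:
        """Run the domain-owned data quality checks on a batch of records."""
        return all(check(rows) for check in self.checks)

# Hypothetical product owned by a sales domain team.
orders = DataProduct(
    name="orders",
    owner="sales-domain",
    metadata={"refresh": "hourly", "schema_version": 3},
    checks=[lambda rows: all(r.get("order_id") is not None for r in rows)],
)
print(orders.validate([{"order_id": 1}, {"order_id": 2}]))  # True
```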

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 135
22 Using a Card Game as a Tool for Developing a Design

Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner

Abstract:

Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under the social and technological conditions that permeate 21st-century musical practices. This forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting will be studied from a network perspective that allows us to view boundaries between online and offline as well as formal, informal, and hybrid contexts as permeable, and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. The subproject of the University of Erfurt therefore investigates school music lessons using interviews, videography, and network maps to analyze new digital pedagogical and didactic possibilities. In the first step, the international literature on songwriting in the music classroom was examined for design development. The analysis focused on the question of which methods and practices are circulating in the current literature. Results from this stage of the project form the basis for the first instructional design, which will help teachers plan regular music classes and subsequently allow us to reconstruct musical learning practices under these conditions. In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck of cards, both teachers and students can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning and an instructional design for teaching music in the postdigital age.

Keywords: card game, collective songwriting, community of practice, network, postdigital

Procedia PDF Downloads 64
21 Glasshouse Experiment to Improve Phytomanagement Solutions for Cu-Polluted Mine Soils

Authors: Marc Romero-Estonllo, Judith Ramos-Castro, Yaiza San Miguel, Beatriz Rodríguez-Garrido, Carmela Monterroso

Abstract:

Mining activity is among the main sources of trace and heavy metal(loid) pollution worldwide, which is a hazard to human and environmental health. That is why several projects have emerged for the remediation of such polluted places. Phytomanagement strategies perform well and bring substantial side benefits. In this work, a glasshouse assay was set up with trace-element-polluted soils from an old Cu mine (NW Spain) that forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE Project (SOE1/P5/E0189)). The objective was to evaluate improvements induced by the following phytoremediation-related treatments: three increasingly complex amendments, alone or together with plant growth (Populus nigra L. alone and together with Trifolium repens L.), and three different rhizosphere bioinocula (Plant Growth Promoting Bacteria (PGP), mycorrhiza (MYC), or mixed (PGP+MYC)). After 110 days of growth, plants were collected, biomass was weighed, and tree length was measured. Physicochemical analyses were carried out to determine pH, effective cation exchange capacity, carbon and nitrogen contents, bioavailable phosphorus (Olsen bicarbonate method), pseudo-total element content (microwave acid digested fraction), EDTA-extractable metals (complexed fraction), and NH4NO3-extractable metals (easily bioavailable fraction). On plant material, nitrogen content and acid-digested elements were determined. Amendment use, plant growth, and bioinoculation were shown to improve soil fertility and/or plant health within the time span of this study. In particular, pH levels increased from 3 (highly acidic) to 5 (acidic) in the worst-case scenario, even reaching 7 (neutrality) in the best plots. Increments in organic matter and pH were related to decreases in the bioavailability of the polluting metals. Plants grew better both with the most complex amendment and with the intermediate one, with few differences due to bioinoculation. With the least complex amendment (compost only), the beneficial effects of the bioinoculants were more apparent, although plants did not thrive very well. On unamended soils, plants neither sprouted nor bloomed. The scheme assayed in this study is suitable for the phytomanagement of these kinds of soils affected by mining activity. These findings should now be tested on a larger scale.

Keywords: aided phytoremediation, mine pollution, phytostabilization, soil pollution, trace elements

Procedia PDF Downloads 66
20 From Social Equity to Spatial Equity in Urban Space: Precedent Study Approach

Authors: Dorsa Pourmojib, Marc J. Boutin

Abstract:

Urban space is used every day by a diverse range of urban dwellers, each with different expectations. In this space, opportunities and resources are not distributed equitably among urban dwellers, despite the importance of inclusivity. In addition, some marginalized groups may not be considered, including people with low incomes, immigrants from diverse cultures, various age groups, and those with special needs. To this end, this research aims to enhance social equity in urban space by bridging the gap between social equity and spatial equity in the urban context. This gap in the knowledge base related to urban design may be present for several reasons: a lack of studies on the relationship between social equity and spatial equity in urban open space; a lack of practical design strategies for promoting social equity in urban open space; a lack of proper site analysis, in terms of context and users, both for designing new urban open spaces and for developing existing ones; a lack of researchers who are also designers; and, finally, the priorities of city policies in addressing such issues, since doing so consumes time, money, and energy. The main objective of this project is to address this gap in the knowledge by exploring the relationship between social equity and spatial equity in urban open space. Answering the main question of this research is a promising step to this end: 'What are the considerations towards providing social equity through the design of urban elements that offer spatial equity?' To answer the main question, several secondary questions should be addressed, such as: How can the characteristics of social equity be translated to spatial equity? What are the diverse users' needs, and which of their needs are not considered in a given site? What are the specific elements of the site that should be designed in order to promote social equity? What is the current situation of social and spatial equity in the proposed site? To answer the research questions and achieve the proposed objectives, a three-step methodology has been implemented. First, a comprehensive research framework based on the available literature is presented. Then, three different urban spaces are analyzed in terms of specific key research questions as precedent studies: Naqsh-e Jahan Square (Iran), Superkilen Park (Denmark), and Campo dei Fiori (Italy). In this regard, a proper gap analysis of the current and proposed situations of these sites has been conducted. Finally, by combining the design considerations extracted from the precedent studies and the literature review, practical design strategies are introduced as a result of this research. The presented guidelines enable designers to create socially equitable urban spaces. To conclude, this research proposes a spatial approach to social inclusion and equity in urban space by presenting a practical framework and criteria for translating social equity to spatial equity in urban areas.

Keywords: inclusive urban design, social equity, social inclusion, spatial equity

Procedia PDF Downloads 143
19 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of the Deacon Reaction

Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl

Abstract:

Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, new in situ characterization techniques that can be used under real catalytic reaction conditions are highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and to correlate these with reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria serves as our example. Furthermore, in situ Transmission Electron Microscopy is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ PGAA experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed homemade transmission cell. For real in situ TEM, we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH). The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with reactivity, whereas the Cl coverage correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase of the Cl uptake was found. Considering all of these observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalysed reaction is proposed: the chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.

Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR

Procedia PDF Downloads 291
18 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. False initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95%CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had equivalent sensitivity performances compared to the Abbott ARCHITECT HBsAg Qualitative II assay with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in ACCESS HBsAg assay were confirmed in ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
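
As a side note on the statistics quoted above, the following sketch computes a binomial proportion and its exact (Clopper-Pearson) 95% confidence interval, the kind of interval reported for clinical specificity. The counts are inferred from the reported percentages (6044 non-reactive of 6047 donors) and are assumptions for illustration, not the study's raw data.

```python
# Exact (Clopper-Pearson) confidence interval for a binomial proportion.
import scipy.stats as st

def clopper_pearson(k, n, alpha=0.05):
    """Two-sided exact CI for k successes out of n trials."""
    lo = st.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = st.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical counts consistent with the reported 99.95% specificity.
k, n = 6044, 6047
lo, hi = clopper_pearson(k, n)
print(f"specificity = {k/n:.2%}, 95% CI {lo:.2%} - {hi:.2%}")
```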

Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay

Procedia PDF Downloads 90
17 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes

Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert

Abstract:

In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the problem of deep geothermal reservoir permeability reduction due to fine-particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may, surprisingly, generate significant permeability reduction, affecting the long-term overall performance of the geothermal system. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). To first identify the clays responsible for clogging, a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM), and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized by the flow in the pore network than chlorite. Based on these results, illite particles were prepared and used in core flooding in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of aggregates were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration, and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 can be observed for pore velocities representative of in situ conditions. Further details of particle retention in the columns were obtained from Magnetic Resonance Imaging and X-ray Tomography techniques, showing that the particle deposition is nonuniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of DLVO theory.
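
For context on the hydrodynamic radius quoted above, the following sketch applies the Stokes-Einstein relation that DLS instruments use to convert a measured diffusion coefficient into a hydrodynamic radius. The diffusion coefficient, temperature, and viscosity are illustrative assumptions, not measurements from this study.

```python
# Stokes-Einstein conversion from diffusion coefficient to hydrodynamic radius.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # absolute temperature, K (25 degrees C)
eta = 8.9e-4         # dynamic viscosity of water at 25 degrees C, Pa.s
D = 2.45e-12         # hypothetical measured diffusion coefficient, m^2/s

# r_h = k_B * T / (6 * pi * eta * D)
r_h = k_B * T / (6 * math.pi * eta * D)
print(f"hydrodynamic radius = {r_h * 1e9:.0f} nm")  # ~100 nm
```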

Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments

Procedia PDF Downloads 177
16 TRAC: A New Software-Based Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system that would fit secondary railway lines, with an easy-to-deploy implementation and low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits.' To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ('Counting Axles'; in French, 'compteur d'essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal. The set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus, the failure probability of track section separation is precisely calculated on the basis of the signal-to-noise ratio (SNR). SNR is a function of the level of traction current conducted by the rails. This is why we developed a very powerful algorithm to reject noise and jamming, to obtain an SNR compatible with the precision required for the track circuit and with SIL 4. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train space localization precisely defined by a calibration system. The operation bypasses the GSM-R radio system of the ERTMS system. Moreover, the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous. ii) A mathematical topology adapted to train space localization, following the train through linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work began as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already reaches Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections. As traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
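
To make the orthogonal-functions idea concrete, here is a minimal sketch, under stated assumptions: each track section is keyed to a sinusoid orthogonal to the others over the analysis window, and the emitting section is identified by inner products (matched filtering) despite additive noise. The frequencies, window length, and noise level are hypothetical illustration values, not the actual TRAC design parameters.

```python
# Section identification by inner products with an orthogonal sinusoid basis.
import numpy as np

fs = 10_000.0                    # sampling rate, Hz
T = 0.1                          # analysis window, s (frequency bins of 10 Hz)
t = np.arange(0.0, T, 1.0 / fs)
section_freqs = [1000.0, 1010.0, 1020.0, 1030.0]  # orthogonal over the window

basis = np.array([np.sin(2 * np.pi * f * t) for f in section_freqs])

# Received rail signal: section 2 is emitting, buried in traction-current noise.
rng = np.random.default_rng(1)
received = basis[2] + rng.normal(0.0, 2.0, t.size)

# Inner products with each basis function; the largest one flags the section.
scores = basis @ received
print("scores:", np.round(scores, 1),
      "-> occupied section:", int(np.argmax(scores)))
```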

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 332
15 Associations Between Pornography Use Motivations and Sexual Satisfaction in Gender Diverse and Cisgender Individuals in the 43-Country International Sex Survey

Authors: Aurélie Michaud, Émilie Gaudet, Mónika Koós, Léna Nagy, Zsolt Demetrovics, Shane W. Kraus, Marc N. Potenza, Beáta Bőthe

Abstract:

Pornography use is prevalent among adults worldwide. Prior studies have assessed the associations between pornography use frequency and sexual satisfaction in cisgender and heterosexual individuals, with mixed results. However, measuring pornography use solely by frequency is problematic, as it can lead to disregarding important contextual factors that may be related to pornography use's potential effects. Pornography use motivations (PUMs) represent key predictors of sexual behaviors, yet their associations with different indicators of sexual well-being have yet to be extensively studied. This cross-cultural study examined the links between the eight PUMs most often reported in the general population (i.e., sexual pleasure, sexual curiosity, emotional distraction or suppression, fantasy, stress reduction, boredom avoidance, lack of sexual satisfaction, and self-exploration) and sexual satisfaction in gender diverse and cisgender individuals. Given the lack of scientific data on associations between individuals' PUMs and sexual satisfaction, these links were examined in an exploratory manner. A total of 43 countries from five continents were included in the International Sex Survey (ISS). A secure online platform was used to collect self-report, anonymous data from 82,243 participants (39.6% men, 57% women, 3.4% gender diverse individuals; M = 32.4 years, SD = 12.5). Gender-based differences in levels of the sexual pleasure, sexual curiosity, emotional distraction, fantasy, stress reduction, boredom avoidance, lack of sexual satisfaction, and self-exploration PUMs were examined using one-way ANOVAs. Then, for each gender group, the associations between each PUM and sexual satisfaction were examined using multiple linear regression, controlling for frequency of masturbation. One-way ANOVAs indicated significant differences between men, women, and gender diverse individuals on all PUMs. For the sexual pleasure, sexual curiosity, fantasy, boredom avoidance, lack of sexual satisfaction, emotional distraction, and stress reduction PUMs, men showed the highest scores, followed by gender-diverse individuals and women. However, for self-exploration, gender-diverse individuals had higher average scores than men. For all PUMs, women's average scores were the lowest. After controlling for frequency of masturbation, for all genders, sexual pleasure, sexual curiosity, and boredom avoidance were significant positive predictors of sexual satisfaction, while the lack of sexual satisfaction PUM was a significant negative predictor. The stress reduction and self-exploration PUMs were significant positive predictors of sexual satisfaction, and fantasy was a significant negative predictor, but only for women. Findings highlight important gender differences in the main motivations underlying pornography use and their relations to sexual satisfaction. While men and gender diverse individuals show similar motivation profiles, women report a distinct experience, with fantasy, stress reduction, and self-exploration being associated with their sexual satisfaction. This work outlines the importance of considering the role of pornography use motivations when studying the links between pornography viewing and sexual well-being, and may provide a basis for gender-based considerations when working with individuals seeking help for their pornography use or sexual satisfaction.
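
A minimal sketch of the analysis pipeline described above, a one-way ANOVA on a motivation score followed by per-gender regressions controlling for masturbation frequency, is given below. The data are synthetic and only a subset of the PUMs is modeled; the column names are hypothetical, not the actual ISS variable names.

```python
# One-way ANOVA across gender groups, then per-gender OLS with a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in for the survey extract; column names are hypothetical.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "gender": rng.choice(["men", "women", "gender_diverse"], n),
    "sexual_pleasure": rng.normal(5, 1.5, n),
    "lack_of_satisfaction": rng.normal(3, 1.2, n),
    "masturbation_frequency": rng.integers(0, 20, n).astype(float),
})
df["sexual_satisfaction"] = (
    0.4 * df["sexual_pleasure"] - 0.3 * df["lack_of_satisfaction"]
    + rng.normal(0, 1, n)
)

# One-way ANOVA: does the sexual-pleasure motivation differ across genders?
groups = [g["sexual_pleasure"].values for _, g in df.groupby("gender")]
print(stats.f_oneway(*groups))

# Per-gender regression on satisfaction, controlling for masturbation frequency.
for gender, sub in df.groupby("gender"):
    fit = smf.ols(
        "sexual_satisfaction ~ sexual_pleasure + lack_of_satisfaction"
        " + masturbation_frequency",
        data=sub,
    ).fit()
    print(gender, fit.params.round(2).to_dict())
```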

Keywords: pornography, sexual satisfaction, cross-cultural, gender diversity

Procedia PDF Downloads 106
14 Machine Learning Analysis of Eating Disorders Risk, Physical Activity and Psychological Factors in Adolescents: A Community Sample Study

Authors: Marc Toutain, Pascale Leconte, Antoine Gauthier

Abstract:

Introduction: Eating disorders (ED), such as anorexia, bulimia, and binge eating, are psychiatric illnesses that mostly affect young people. The main symptoms concern eating (restriction, excessive food intake) and weight-control behaviors (laxatives, vomiting). Psychological comorbidities (depression, executive function disorders, etc.) and problematic behaviors toward physical activity (PA) are commonly associated with ED. Knowledge of ED risk factors is still lacking, and more community sample studies are needed to improve prevention and early detection. To our knowledge, studies are needed that specifically investigate the link between ED risk level, PA, and psychological risk factors in a community sample of adolescents. The aim of this study is to assess the relation between ED risk level, exercise (type, frequency, and motivations for engaging in exercise), and psychological factors based on the Jacobi risk factors model. We hypothesize that a high risk of ED will be associated with the practice of high-caloric-cost PA, motivations oriented to weight and shape control, and psychological disturbances. Method: An online survey intended for students was sent to several middle schools and colleges in northwest France. This survey combined several questionnaires: the Eating Attitudes Test-26, assessing ED risk; the Exercise Motivation Inventory-2, assessing motivations toward PA; the Hospital Anxiety and Depression Scale, assessing anxiety and depression; the Contour Drawing Rating Scale and the Body Esteem Scale, assessing body dissatisfaction; the Rosenberg Self-Esteem Scale, assessing self-esteem; the Exercise Dependence Scale-Revised, assessing PA dependence; the Multidimensional Assessment of Interoceptive Awareness, assessing interoceptive awareness; and the Frost Multidimensional Perfectionism Scale, assessing perfectionism. Machine learning analysis will be performed to constitute groups with a tree-based clustering method, extract risk profile(s) with a bootstrap comparison method, and predict ED risk with a decision-tree-based model. Expected results: 1044 complete records have already been collected, and the survey will be closed at the end of May 2022. Records will be analyzed with a clustering method and a bootstrap method in order to reveal risk profile(s). Furthermore, a decision-tree prediction method will be used to extract an accurate predictive model of ED risk. This analysis will confirm the typical main risk factors and will give more data on presumed strong risk factors such as exercise motivations and interoceptive deficit. Furthermore, it will highlight particular risk profiles with a strong level of evidence and contribute greatly to improving the early detection of ED and to a better understanding of ED risk factors.
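
Below is a minimal sketch of the planned pipeline, under stated assumptions: hierarchical (tree-based) clustering to form groups, a bootstrap comparison of cluster profiles, and a decision tree predicting risk status. The scores are synthetic, the feature meanings are hypothetical, and the EAT-26 cutoff of 20 is the commonly used threshold, assumed here for illustration.

```python
# Clustering, bootstrap comparison, and decision-tree prediction on synthetic data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-ins for questionnaire scores; meanings are hypothetical.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(10, 3, n),   # e.g., anxiety score
    rng.normal(25, 6, n),   # e.g., weight-management exercise motivation
    rng.normal(50, 10, n),  # e.g., perfectionism
])
# Hypothetical risk label, standing in for EAT-26 total >= the usual cutoff of 20.
y = ((0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 3, n)) > 14).astype(int)

# 1) Hierarchical (tree-based) clustering to form candidate profiles.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# 2) Bootstrap comparison of the anxiety-like score across clusters.
for c in range(3):
    scores = X[labels == c, 0]
    boots = [rng.choice(scores, scores.size, replace=True).mean()
             for _ in range(1000)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"cluster {c}: mean score 95% CI [{lo:.1f}, {hi:.1f}]")

# 3) Decision tree predicting risk status from the same factors.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```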

Keywords: eating disorders, risk factors, physical activity, machine learning

Procedia PDF Downloads 83
13 Effect of Time on Stream on the Performance of Plasma-Assisted Fe-Doped Cryptomelanes in Trichloroethylene (TCE) Oxidation

Authors: Sharmin Sultana, Nicolas Nuns, Pardis Simon, Jean-Marc Giraudon, Jean-François Lamonier, Nathalie De Geyter, Rino Morent

Abstract:

Environmental issues, especially air pollution, have become a major concern of environmental legislation as a consequence of growing global awareness. In this regard, control of volatile organic compound (VOC) emissions has become an important issue due to their potential toxicity, carcinogenicity, and mutagenicity. Research into innovative technologies for VOC abatement is stimulated by the need to meet new, stringent standards for VOC emissions. One emerging strategy is the coupling of two existing complementary technologies, namely non-thermal plasma (NTP) and heterogeneous catalysis, to obtain a more efficient process for VOC removal from air. The objective of this work is to investigate the abatement of trichloroethylene (TCE, a highly toxic chlorinated VOC) from moist air (RH = 15%) as a function of time by the combined use of a multi-pin-to-plate negative DC corona/glow discharge with an Fe-doped cryptomelane catalyst downstream, i.e., a post-plasma-catalysis (PPC) process. In the catalyst-alone case, experiments reveal that, initially, Fe-doped cryptomelane (regardless of the mode of Fe incorporation: co-precipitation (Fe-K-OMS-2) or impregnation (Fe/K-OMS-2)) exhibits excellent activity in decomposing TCE compared to cryptomelane (K-OMS-2) itself. The maximum TCE abatement obtained after 6 min is as follows: Fe-K-OMS-2 (73.3%) > Fe/K-OMS-2 (48.5%) > K-OMS-2 (22.6%). However, with prolonged operation time, whichever catalyst is considered, the abatement of TCE decreases. After 111 min of exposure, the catalysts rank as follows: Fe/K-OMS-2 (11%) < K-OMS-2 (12.3%) < Fe-K-OMS-2 (14.5%). Clearly, this indicates catalyst deactivation, either by chlorination or by blocking of the active sites. Remarkably, in the PPC configuration (energy density = 60 J/L, catalyst temperature = 150°C), experiments reveal enhanced performance towards TCE removal regardless of the type of catalyst. After 6 min on stream, the TCE removal efficiencies are as follows: K-OMS-2 (60%) < Fe/K-OMS-2 (79%) < Fe-K-OMS-2 (99.3%). The enhanced performance over the Fe-K-OMS-2 catalyst is attributed to its high surface oxygen mobility and structural defects, leading to high O₃ decomposition efficiency that yields active species able to oxidize the plasma-processed hazardous by-products and any remaining VOC into CO₂. Moreover, both the undoped and doped catalysts remain strongly capable of abating TCE with time on stream. The TCE removal efficiencies of the PPC processes with the Fe/K-OMS-2 and K-OMS-2 catalysts are not affected by time on stream, indicating excellent catalyst stability. When using Fe-K-OMS-2 as the catalyst, TCE abatement slightly decreases with time on stream; however, it is noteworthy that a constant abatement of 83% is still observed for at least 30 minutes. These results prove that combining NTP with catalysts not only increases the catalytic activity but also avoids, to some extent, the poisoning of catalytic sites, resulting in enhanced catalyst stability. In order to better understand the different surface processes occurring in the course of total TCE oxidation in the PPC experiments, a detailed X-ray Photoelectron Spectroscopy (XPS) and Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) study of the fresh and used catalysts is in progress.

Keywords: Fe doped cryptomelane, non-thermal plasma, plasma-catalysis, stability, trichloroethylene

Procedia PDF Downloads 208
12 Augmented Reality to Support the Design of Innovative Agroforestry Systems

Authors: Laetitia Lemiere, Marie Gosme, Gerard Subsol, Marc Jaeger

Abstract:

Agroforestry is recognized as a way of developing sustainable and resilient agriculture that can help fight climate change. However, the number of species combinations, spatial configurations, and management options for trees and crops is vast. These choices must be adapted to the pedoclimatic and socio-economic contexts and to the objectives of the farmer, who therefore needs support in designing his or her system. Participatory design workshops are a good way to integrate the knowledge of several experts in order to design such complex systems. The design of agroforestry systems should take into account both spatial aspects (e.g., spacing of trees within and between lines, tree line orientation, tree-crop distance, species spatial patterns) and temporal aspects (e.g., crop rotations, tree thinning and pruning, tree planting in the case of successional agroforestry). Furthermore, the interactions between trees and crops evolve as the trees grow. However, agroforestry design workshops generally emphasize only the spatial aspect, through the use of static tokens to represent the different species when designing the spatial configuration of the system. Augmented reality (AR) may overcome this limitation, allowing participants to visualize dynamic representations of trees and crops, and also their interactions, while retaining the possibility of physically interacting with the system being designed (i.e., moving trees, adding or removing species, etc.). We propose an ergonomic digital solution capable of assisting a group of agroforestry experts in designing an agroforestry system and representing it. We investigated the use of web-based, marker-based AR that requires neither specific hardware nor specific installation, so that all users can use their own smartphones straight out of their pockets. We developed a prototype mobilizing the AR.js, ArToolKit.js, and Three.js open-source libraries. In our implementation, we gradually build a virtual agroforestry system pattern scene from the users' interactions. A specific set of markers initializes the scene properties, and the various plant species are added and located during the workshop design session. The full virtual scene, including the tree positions with their neighborhood, is saved for further use, such as virtual, augmented instantiation in the farmer's fields. The number of tree species available in the application is gradually increasing; we mobilize 3D digital models for walnut, poplar, wild cherry, and other popular species used in agroforestry systems. The prototype allows shadow computations and the representation of trees at various growth stages, as well as different tree generations, and is thus able to visualize the dynamics of the system over time. Future work will focus on i) the design of complex patterns mobilizing several tree/shrub organizations, not restricted to lines; ii) the design of interfaces related to cultural practices, such as clearing or pruning; and iii) the representation of tree-crop interactions. Besides tree shade (light competition), our objective is to also represent below-ground competition (water, nitrogen) and other variables of interest for the design of agroforestry systems (e.g., predicted crop yield).

Keywords: agroforestry system design, augmented reality, marker-based AR, participative design, web-based AR

Procedia PDF Downloads 175
11 Inferring Influenza Epidemics in the Presence of Stratified Immunity

Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin van Leeuwen, Steven Riley

Abstract:

Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and the true infection events can vary from one population to another and from one year to another, recent studies combine serological test results with syndromic data from traditional surveillance in epidemic models to make inference on the epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies use dichotomized data with a threshold (typically a titre of 1:40) to classify individuals as likely recently infected or likely immune, and then estimate the cumulative incidence. Dichotomized data can lead to underestimation of the influenza attack rate. In order to improve the use of serosurveillance data, we propose here a refinement of the concept of stratified immunity within an epidemic model for influenza transmission, such that all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres from 523 individuals and 465 individuals during the pre- and post-pandemic phases of the 2009 pandemic in Hong Kong were collected. The model was fitted to the serological data in an age-structured population using a Bayesian framework and was able to reproduce key features of the epidemics. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined as the effective reproductive number in the presence of stratified immunity, and its temporal dynamics were compared to those from a traditional epidemic model using dichotomized seropositivity data. The Deviance Information Criterion (DIC) was used to measure the fit of the model to the serological data under different mechanisms of serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = -7.0). Age-specific mixing patterns with child-specific transmissibility, rather than pre-existing immunity, were most likely to explain the high serological attack rates in children and the low serological attack rates in the elderly (ΔDIC = -38.5). Our results suggest that the disease dynamics and herd immunity of a population can be described more accurately for influenza when the distribution of immunity is explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre of 1:40 (ΔDIC = -11.5). During the outbreak, RB declined slowly from 1.22 [1.16-1.28] in the first four months after 1st May. RB then dropped rapidly below 1 during September and October, consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is to monitor disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control.
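
The following minimal sketch illustrates the core idea, assuming a hypothetical titre distribution and protection curve: each titre level carries a graded susceptibility rather than an all-or-nothing 1:40 split, and the effective reproduction number scales with population-average susceptibility. The logistic protection curve and R0 = 1.8 are illustrative assumptions, not the fitted quantities from this study.

```python
# Stratified immunity: graded susceptibility by titre vs. a 1:40 threshold.
import numpy as np

titres = np.array([5, 10, 20, 40, 80, 160, 320])              # HI titre levels
prop = np.array([0.35, 0.20, 0.15, 0.12, 0.10, 0.05, 0.03])   # population shares

# Hypothetical protection curve: susceptibility falls log-linearly with titre,
# with 50% protection near 1:40 (the spirit of the classical HI correlate).
susceptibility = 1.0 / (1.0 + np.exp(1.5 * (np.log2(titres) - np.log2(40))))

R0 = 1.8                                  # hypothetical basic reproduction number
R_B = R0 * np.sum(prop * susceptibility)  # effective R under stratified immunity

# Threshold version for contrast: everyone below 1:40 counts as fully susceptible.
R_thresh = R0 * np.sum(prop[titres < 40])
print(f"R_B (stratified) = {R_B:.2f}, R (1:40 threshold) = {R_thresh:.2f}")
```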

Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity

Procedia PDF Downloads 260
10 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations

Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra

Abstract:

The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize to an EMG signal recorded during a standardized task or taken from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include the maximum voluntary isometric contraction (MVIC), the dynamic EMG peak (EMGPeak), and the dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could misrepresent the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to that recorded at a target torque (EMGTS); the bioelectric methods were normalization to the mean and to the peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N = 11) and YOUNG (26.6 yrs, N = 11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated-measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD differed with each method, leading to different statistical interpretations between methods. EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, representing inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
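
The sketch below applies the three normalization methods compared above to a single hypothetical phase-averaged EMG profile (16 bins per gait cycle, bins 1-10 stance and 11-16 swing). The envelope values and the target-torque reference contraction are assumptions standing in for real recordings.

```python
# EMGPeak, EMGMean, and EMGTS normalization of a phase-averaged EMG profile.
import numpy as np

rng = np.random.default_rng(2)
emg = np.abs(rng.normal(0.4, 0.15, 16))  # hypothetical phase-averaged envelope, mV

emg_peak = emg / emg.max()    # EMGPeak: normalize to the dynamic peak of the task
emg_mean = emg / emg.mean()   # EMGMean: normalize to the dynamic mean of the task

# EMGTS: normalize to EMG recorded during a standardized target-torque
# contraction measured with a force transducer (reference value assumed).
emg_at_target_torque = 0.55   # mV, hypothetical reference contraction
emg_ts = emg / emg_at_target_torque

# Bins 1-10 correspond to stance, bins 11-16 to swing, as in the study.
for name, prof in (("EMGPeak", emg_peak), ("EMGMean", emg_mean), ("EMGTS", emg_ts)):
    print(f"{name}: stance mean = {prof[:10].mean():.2f}, "
          f"swing mean = {prof[10:].mean():.2f}")
```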

Keywords: electromyography, EMG normalization, functional EMG, older adults

Procedia PDF Downloads 91
9 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems

Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra

Abstract:

Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing, such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, team sports require rapid analysis and feedback of results from multiple athletes; therefore, standardized and automated methods that improve the speed, efficiency, and reliability of this process are warranted. Thus, the purpose of this study was to compare different methods of sprint end detection on the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2 x 40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a vest pouch between the scapulae. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to relative horizontal force at zero velocity (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity drops by 0.4 m/s; 3. the time after peak velocity drops by 0.6 m/s; and 4. the time when the integrated distance from the GPS/GNSS signal reaches 40 m. Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For goodness-of-fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the 0.4 and 0.6 m/s velocity decays, while the 40 m end had the highest RSS values. For inter-trial reliability, the end of sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) when MSS was achieved had very large to near-perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should place end of sprint detection either when peak velocity is reached or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics, although more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into the usual training, which shows promise for sprint monitoring in the field with this technology.
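A minimal Python sketch of the four end-detection rules and the resulting model fit is given below, assuming the mono-exponential velocity model v(t) = MSS·(1 − e^(−t/τ)) commonly used in sprint FVP profiling; the function names, initial guesses, and integration details are illustrative, not the study's actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def sprint_model(t, mss, tau):
    # Mono-exponential sprint velocity model used in FVP profiling.
    return mss * (1.0 - np.exp(-t / tau))

def end_index(t, v, method):
    """Index marking the end of the sprint in a 10 Hz velocity trace."""
    i_peak = int(np.argmax(v))
    if method == "peak":                 # 1. time of peak velocity
        return i_peak
    if method in ("drop04", "drop06"):   # 2-3. after velocity decays from peak
        drop = 0.4 if method == "drop04" else 0.6
        after = np.where(v[i_peak:] <= v[i_peak] - drop)[0]
        return i_peak + int(after[0]) if after.size else len(v) - 1
    if method == "dist40":               # 4. integrated distance reaches 40 m
        dist = np.cumsum(v) * (t[1] - t[0])
        return min(int(np.searchsorted(dist, 40.0)), len(v) - 1)
    raise ValueError(f"unknown method: {method}")

def fit_fvp(t, v, method):
    """Fit the model up to the detected end; return (MSS, tau) and the RSS."""
    j = end_index(t, v, method)
    (mss, tau), _ = curve_fit(sprint_model, t[:j + 1], v[:j + 1], p0=(8.0, 1.0))
    rss = float(np.sum((v[:j + 1] - sprint_model(t[:j + 1], mss, tau)) ** 2))
    return (mss, tau), rss
```

F₀, V₀, and Pmax then follow from MSS, τ, and body mass in the standard FVP computations, so any bias in the detected end point propagates directly into the profile metrics.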

Keywords: automated, biomechanics, team-sports, sprint

Procedia PDF Downloads 119
8 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies

Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon

Abstract:

In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, the question arises of how popular musicians learn in the (post)digital age. Against the backdrop of increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam, funded by the German Ministry of Education and Research) pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Community of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods of Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field. In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and the social practices entangled with these technologies. Thus, any human online community member can be invited to intervene directly in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency, and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises of how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.

Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps

Procedia PDF Downloads 127
7 The Effects of Labeling Cues on Sensory and Affective Responses of Consumers to Categories of Functional Food Carriers: A Mixed Factorial ANOVA Design

Authors: Hedia El Ourabi, Marc Alexandre Tomiuk, Ahmed Khalil Ben Ayed

Abstract:

The aim of this study is to investigate the effects of the labeling cues traceability (T), health claim (HC), and verification of health claim (VHC) on consumer affective response and sensory appeal toward a wide array of functional food carriers (FFC). Research in the food area has predominantly examined the effects of these information cues independently on cognitive responses to food product offerings; investigations of potential interaction effects among these factors on affective response and sensory appeal are therefore scant. Moreover, previous studies have typically emphasized single or limited sets of functional food products and categories. In turn, this study considers five food product categories enriched with omega-3 fatty acids, namely: meat products, eggs, cereal products, dairy products, and processed fruits and vegetables. It is, therefore, exhaustive in scope rather than exclusive. An investigation of the potential simultaneous effects of these information cues on the affective responses and sensory appeal of consumers should give rise to important insights for both functional food manufacturers and policymakers. A mixed (2 x 3) x (2 x 5) between-within subjects factorial ANOVA design was implemented in this study. T (two levels: completely traceable or non-traceable) and HC (three levels: functional health claim, disease risk reduction health claim, or disease prevention health claim) were treated as between-subjects factors, whereas VHC (two levels: by a government agency or by a non-government agency) and FFC (five food categories) were modeled as within-subjects factors. Subjects were randomly assigned to one of the six between-subjects conditions. A total of 463 questionnaires were obtained from a convenience sample of undergraduate students at various universities in the Montreal and Ottawa areas (Canada). Consumer affective response and sensory appeal were respectively measured via the following statements assessed on seven-point semantic differential scales: ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unlikeable (1) / Likeable (7)’ and ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unappetizing (1) / Appetizing (7).’ Results revealed a significant interaction effect between HC and VHC on consumer affective response as well as on sensory appeal toward foods enriched with omega-3 fatty acids. On the other hand, the three-way interaction effect among T, HC, and VHC on either of the two dependent variables was not significant. However, the triple interaction effect among T, VHC, and FFC was significant on consumer affective response, and the interaction effect among T, HC, and FFC was significant on consumer sensory appeal. These findings should serve as impetus for functional food manufacturers to cooperate closely with policymakers in order to improve on and legitimize the use of health claims in their marketing efforts through credible verification practices and protocols put in place by trusted government agencies. Finally, both functional food manufacturers and retailers may benefit from the socially responsible image conveyed by product offerings whose ingredients remain traceable from farm to kitchen table.
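For readers unfamiliar with mixed between-within ANOVA, the sketch below runs a deliberately simplified version of the design (one between-subjects factor, HC, and one within-subjects factor, FFC) on simulated ratings using the pingouin package; the full (2 x 3) x (2 x 5) design would require a more general modeling tool, and all data, sample sizes, and variable names here are illustrative.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
carriers = ["meat", "eggs", "cereal", "dairy", "fruit_veg"]
claims = ["functional", "risk_reduction", "prevention"]

rows = []
for subj in range(60):
    hc = claims[subj % 3]            # between-subjects: health claim type
    for carrier in carriers:         # within-subjects: food carrier category
        rows.append({"subject": subj, "HC": hc, "FFC": carrier,
                     "affective": rng.normal(4.5, 1.0)})  # 7-point-scale rating
df = pd.DataFrame(rows)

# Mixed ANOVA: one between factor (HC), one within factor (FFC).
aov = pg.mixed_anova(data=df, dv="affective", within="FFC",
                     subject="subject", between="HC")
print(aov.round(3))
```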

Keywords: functional foods, labeling cues, affective appeal, sensory appeal

Procedia PDF Downloads 164
6 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease

Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette

Abstract:

Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens use metabolites for infection and progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data, combined with other omics or clinical data and analyzed using various unsupervised and supervised methods, can lead to better diagnosis and prognosis. It can also provide information about resistance and contribute knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used for the investigation of disease development in wheat and human cells, will be presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation, and low cost. This presentation will describe a new method for metabolite quantification from NMR data that combines alignment of the spectra of standards to sample spectra, followed by multivariate linear regression optimization of the spectra of assigned metabolites against the samples' spectra. Several different alignment methods were tested, and the multivariate linear regression results were compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we will present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, and biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of fusarium head blight resistance in wheat cultivars as well as analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore, and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating resistance level in wheat subtypes. Quantification data are compared to results obtained using other published methods. Fusarium infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of carbonic anhydrase inhibitors on the observed metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the annotation of the results is provided in the context of biochemical pathways.
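As a sketch of the alignment-plus-regression idea, the Python snippet below shifts each standard's spectrum onto the sample spectrum by cross-correlation and then estimates concentrations by regressing the aligned standards against the sample; the use of non-negative least squares and a single global shift per standard are assumed implementation choices for illustration, not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.signal import correlate

def align(reference, sample, max_shift=50):
    """Shift a standard's 1H NMR spectrum (in points) to best overlay the
    sample spectrum, using the cross-correlation peak within +-max_shift."""
    xc = correlate(sample, reference, mode="full")
    center = len(reference) - 1                       # zero-lag index
    window = xc[center - max_shift:center + max_shift + 1]
    shift = int(np.argmax(window)) - max_shift
    return np.roll(reference, shift)

def quantify(sample, standards):
    """Regress aligned standard spectra onto the sample spectrum with
    non-negative least squares; coefficients scale with concentration."""
    basis = np.column_stack([align(s, sample) for s in standards])
    coefficients, residual = nnls(basis, sample)
    return coefficients, residual
```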

Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment

Procedia PDF Downloads 338
5 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules

Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel

Abstract:

The development of ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. This research looks at how procedural modelling can be used to define such common characteristics and to understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. ‘Procedural modelling’ is a digital modelling technique that has been growing in popularity, particularly within the game and film industries, as well as in fields such as industrial design and architecture. This design method entails the creation of a parametric ‘ruleset’ that can be easily adjusted to produce many variations of geometry, rather than the single geometry typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool; Nicholas Webb and Andre Brown; and, most notably, Mark Burry. What these precedents share is that the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects’ geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves first establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then consolidating all the findings into a single 3D procedural asset within the software ‘Houdini’. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as lost structures with little surviving evidence can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington-based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.
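To make the idea of a parametric ruleset concrete, the sketch below generates the profile of a Gothic pointed arch from two adjustable parameters, the opening width and the rise; it is a generic Python illustration of the kind of rule that could drive a procedural asset (for example, inside a Houdini Python SOP), not code from the study, and the two-arc construction is an assumption.

```python
import numpy as np

def pointed_arch(width, height, n=32):
    """Profile of a pointed arch built from two circular arcs springing
    from (+-width/2, 0) and meeting at the apex (0, height)."""
    half = width / 2.0
    # Center (cx, 0) and radius r of the arc forming the left half, solved
    # from the two points it must pass through. height > width/2 gives a
    # pointed arch; height = width/2 degenerates to a semicircle.
    cx = (height ** 2 - half ** 2) / (2.0 * half)
    r = cx + half
    t = np.linspace(np.pi, np.arctan2(height, -cx), n)   # springing -> apex
    left = np.column_stack([cx + r * np.cos(t), r * np.sin(t)])
    right = np.column_stack([-left[::-1, 0], left[::-1, 1]])  # mirrored half
    return np.vstack([left, right])

# Varying the parameters yields different members of the same style family:
equilateral = pointed_arch(width=2.0, height=np.sqrt(3.0))  # classic Gothic
lancet = pointed_arch(width=2.0, height=2.5)                # taller, sharper
```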

Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling

Procedia PDF Downloads 131
4 Effects of Irrigation Applications during Post-Anthesis Period on Flower Development and Pyrethrin Accumulation in Pyrethrum

Authors: Dilnee D. Suraweera, Tim Groom, Brian Chung, Brendan Bond, Andrew Schipp, Marc E. Nicolas

Abstract:

Pyrethrum (Tanacetum cinerariifolium) is a perennial plant belonging to the family Asteraceae. It is cultivated commercially for the extraction of natural insecticide pyrethrins, which accumulate in the achenes of its flower heads. Approximately 94% of the pyrethrins are produced within the secretory ducts and trichomes of the achenes of the mature pyrethrum flower. Pyrethrum is the most widely used botanical insecticide in the world, and Australia is currently the world's largest pyrethrum producer. Rainfall in Australian pyrethrum-growing regions during the flowering period, in late spring and early summer, is significantly low; the lack of adequate soil moisture, combined with elevated temperatures during the post-anthesis period, results in yield reductions. Therefore, understanding the yield responses of pyrethrum to irrigation is important for pyrethrum as a commercial crop. Irrigation management has been identified as a key area of pyrethrum crop management that could be manipulated to increase yield. Pyrethrum is a comparatively drought-tolerant plant with some ability to survive in dry conditions due to deep rooting. But in dry areas and in dry seasons, the crop cannot reach its full yield potential without adequate soil moisture. Therefore, irrigation during the flowering period is essential to prevent crop water stress and maximise yield. Irrigation during the water-deficit period results in an overall increased rate of water uptake and growth by the plant, which is essential to achieve the maximum yield benefits from commercial crops. The effects of irrigation treatments applied during the post-anthesis period on pyrethrum yield responses were studied for two irrigation methods. The experiment was conducted in a first-harvest commercial pyrethrum field in Waubra, Victoria, during the 2012/2013 season. Drip irrigation and overhead sprinkler irrigation treatments applied during the whole flowering period were compared with a ‘rainfed’ treatment in relation to flower yield and pyrethrin yield responses. The results showed that the application of 180 mm of irrigation throughout the post-anthesis period, from the early flowering stages to physiological maturity, under the drip irrigation treatment increased pyrethrin concentration by 32%, which combined with the 95% increase in flower yield to give a total pyrethrin yield increase of 157% compared to the ‘rainfed’ treatment. In contrast, the overhead sprinkler irrigation treatment increased pyrethrin concentration by 19%, which combined with the 60% increase in flower yield to give a total pyrethrin yield increase of 91% compared to the ‘rainfed’ treatment. Irrigation treatments applied throughout the post-anthesis period significantly increased flower yield as a result of an enhanced number of flowers and flower size. Irrigation provides adequate soil moisture for flower development, which slows the rate of flower development and increases the length of the flowering period, resulting in a crop harvest delayed by 11 days compared to the ‘rainfed’ treatment. Overall, irrigation has a major impact on pyrethrin accumulation, increasing the rate and duration of pyrethrin accumulation and resulting in a higher pyrethrin yield per flower at physiological maturity. The findings of this study will be important for future yield predictions and for developing advanced agronomic strategies to maximise pyrethrin yield in pyrethrum.
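Because pyrethrin yield is the product of flower mass and pyrethrin concentration, the two reported gains combine multiplicatively into the total; the short Python check below (a sketch using only the percentages quoted above) reproduces the figures, with the 91% value reflecting rounding of the underlying measurements.

```python
def total_gain(conc_gain, flower_gain):
    # (1 + concentration gain) x (1 + flower-yield gain) - 1
    return (1 + conc_gain) * (1 + flower_gain) - 1

print(f"drip:      {total_gain(0.32, 0.95):.0%}")  # -> 157%
print(f"sprinkler: {total_gain(0.19, 0.60):.0%}")  # -> 90% (reported as 91%)
```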

Keywords: achene, drip irrigation, overhead irrigation, pyrethrin

Procedia PDF Downloads 409
3 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fragmented among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and which multiply in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and the distance between, the actors: a dilution of responsibility arises from a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for the induced effects and guides the reintroduction of modifications into the system to rectify its drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of artificial intelligence systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens through which to contemplate this complexity, offer valuable perspectives for addressing these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 63
2 Effects of School Culture and Curriculum on Gifted Adolescent Moral, Social, and Emotional Development: A Longitudinal Study of Urban Charter Gifted and Talented Programs

Authors: Rebekah Granger Ellis, Pat J. Austin, Marc P. Bonis, Richard B. Speaker, Jr.

Abstract:

Using two psychometric instruments, this study examined the social and emotional intelligence and moral judgment levels of more than 300 gifted and talented high school students enrolled in arts-integrated, academic acceleration, and creative arts charter schools in an ethnically diverse large city in the southeastern United States. Gifted and talented individuals possess distinguishable characteristics; these frequently appear as strengths, but serious problems often accompany them. Although many gifted adolescents thrive in their environments, some struggle in their school and community due to emotional intensity, motivation and achievement issues, lack of peers and isolation, identification problems, sensitivity to expectations and feelings, perfectionism, and other difficulties. These gifted students endure and survive in school rather than flourish. Gifted adolescents face special intrapersonal, interpersonal, and environmental problems. Furthermore, they experience greater levels of stress, disaffection, and isolation than non-gifted individuals due to their advanced cognitive abilities. Therefore, it is important to examine the long-term effects of participation in various gifted and talented programs on the socio-affective development of these adolescents. Numerous studies have researched moral, social, and emotional development from cognitive-developmental, psychoanalytic, and behavioral-learning perspectives; however, in almost all cases, these three facets have been studied separately, leading to many divergent theories. Additionally, various frameworks and models purporting to encourage the different socio-affective branches of development have been debated in curriculum theory, yet research is inconclusive on the effectiveness of these programs. Most often studied is the socio-affective domain, which includes development and regulation of emotions; empathy development; interpersonal relations and social behaviors; personal and gender identity construction; and moral development, thinking, and judgment. Examining development in these domains can provide insight into why some gifted and talented adolescents are not always successful in adulthood despite advanced IQ scores, particularly whether the emotional, social, and moral capabilities of gifted and talented individuals are as advanced as their intellectual abilities and how these capabilities are related to each other. This mixed-methods longitudinal study examined students in urban gifted and talented charter schools for (1) socio-affective development levels and (2) whether a particular environment encourages developmental growth. The research questions guiding the study were: (1) How do academically and artistically gifted 10th and 11th grade students perform on psychological scales of social and emotional intelligence and moral judgment? Do they differ from the normative sample? Do gender differences exist among gifted students? (2) Do adolescents who attend distinctive gifted charter schools differ in developmental profiles? Students’ performances on the psychometric instruments were compared over time and by program type. Assessing moral judgment (DIT-2) and socio-emotional intelligence (BarOn EQ-i:YV), participants took pre-, mid-, and post-tests during one academic school year. Quantitative differences in growth on these psychological scales (individual and school-wide) were examined. Where a school showed change, qualitative artifacts (culture, curricula, instructional methodology, stakeholder interviews) provided insight into environmental correlates.

Keywords: gifted and talented programs, moral judgment, social and emotional intelligence, socio-affective education

Procedia PDF Downloads 193
1 Impact of Simulated Brain Interstitial Fluid Flow on the Chemokine CXC-Chemokine-Ligand-12 Release From an Alginate-Based Hydrogel

Authors: Wiam El Kheir, Anais Dumais, Maude Beaudoin, Bernard Marcos, Nick Virgilio, Benoit Paquette, Nathalie Faucheux, Marc-Antoine Lauzon

Abstract:

The highly infiltrative pattern of glioblastoma multiforme (GBM) cells is the main cause of the failure of the current standard treatments. The tumor's high heterogeneity, the interstitial fluid flow (IFF), and chemokines guide GBM cell migration in the brain parenchyma, resulting in tumor recurrence. Drug delivery systems have emerged as an alternative approach for developing effective treatments for the disease. Some recent studies have proposed to harness the effect of the chemokine CXC-chemokine-ligand-12 (CXCL12) to direct and control cancer cell migration through a delivery system. However, the effect of the dynamic brain environment on such delivery systems remains poorly understood. Nanoparticles (NPs) and hydrogels are known as good carriers for encapsulating different agents and controlling their release. We studied the release of CXCL12 (free or loaded into NPs) from an alginate-based hydrogel under static and indirect perfusion (IP) conditions. Under static conditions, the main phenomenon driving CXCL12 release from the hydrogel was diffusion, with strong interactions between the positively charged CXCL12 and the negatively charged alginate. CXCL12 release profiles were independent of the initial mass loadings. Afterwards, we demonstrated that the release could be tuned by loading CXCL12 into alginate/chitosan nanoparticles (Alg/Chit-NPs) and embedding them into the alginate hydrogel. The initial burst release was substantially attenuated, and overall cumulative release percentages of 21%, 16%, and 7% were observed for initial mass loadings of 0.07, 0.13, and 0.26 µg, respectively, suggesting stronger electrostatic interactions. The results were mathematically modeled within a previously developed framework based on Fick's second law of diffusion to estimate the effective diffusion coefficient (Deff) and the mass transfer coefficient. Embedding CXCL12 into NPs decreased Deff by an order of magnitude, which was coherent with the experimental data. Thereafter, we developed an in-vitro 3D model that takes into consideration the convective contribution of the brain IFF, to study CXCL12 release in an in-vitro microenvironment that mimics the human brain as faithfully as possible. From its unique design, the model also allowed us to understand the effect of IP on CXCL12 release with respect to time and space. Four flow rates (0.5, 3, 6.5, and 10 µL/min), which may modulate CXCL12 release in-vivo depending on the tumor location, were assessed. Under IP, cumulative release percentages varying between 4.5-7.3%, 23-58.5%, 77.8-92.5%, and 89.2-95.9% were observed for the four flow rates, respectively, across initial mass loadings of 0.08, 0.16, and 0.33 µg. As the flow rate increased, IP culture conditions resulted in a higher release of CXCL12 than static conditions, as convection became the main driving mass transport phenomenon. Further, depending on the flow rate, IP had a direct impact on CXCL12 distribution within the simulated brain tissue, which illustrates the importance of developing such 3D in-vitro models to assess the efficiency of a delivery system targeting the brain. In future work, using this very model, we aim to understand the impact of the different phenomena at play on GBM cell behavior in response to the resulting chemokine gradients under various flows, while allowing the cells to express their invasive characteristics in an in-vitro microenvironment that mimics the in-vivo brain parenchyma.
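As an illustration of the kind of Fick's-second-law framework mentioned, the sketch below fits an effective diffusion coefficient to cumulative-release data using Crank's series solution for a slab; the slab geometry, characteristic length, and data values are illustrative assumptions, not the study's model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def fractional_release(t, deff, length=1e-3, n_terms=100):
    """Fickian fractional release Mt/Minf from a slab (Crank's series);
    length is the characteristic diffusion length of the gel in meters."""
    n = np.arange(n_terms)[:, None]
    k = (2 * n + 1) ** 2 * np.pi ** 2
    series = (8.0 / k) * np.exp(-k * deff * t[None, :] / length ** 2)
    return 1.0 - series.sum(axis=0)

# Illustrative cumulative-release observations (time in s, fraction released).
t_obs = np.array([0.0, 6, 12, 24, 48, 96]) * 3600.0
frac_obs = np.array([0.0, 0.05, 0.09, 0.13, 0.17, 0.21])

(deff,), _ = curve_fit(lambda t, d: fractional_release(t, d),
                       t_obs, frac_obs, p0=[1e-12], bounds=(0, np.inf))
print(f"estimated Deff = {deff:.2e} m^2/s")
```

Loading the payload into nanoparticles would appear in such a fit as a roughly tenfold drop in the estimated Deff, matching the order-of-magnitude decrease reported above.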

Keywords: 3D culture system, chemokines gradient, glioblastoma multiforme, kinetic release, mathematical modeling

Procedia PDF Downloads 85