Search results for: bar model method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 31658


878 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium

Authors: Joanna Cydejko, Paulina Mika

Abstract:

Hypersensitivity reactions are adverse drug effects that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); in about 1% of people, the cause of the anaphylactic reaction remains unidentified despite detailed diagnostics. Contrast media can trigger anaphylaxis through mechanisms that are not fully understood; hypersensitivity reactions to them can arise through both immunological and non-immunological pathways. Symptoms of anaphylaxis occur within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize, or improve the visibility of, anatomical structures. In computed tomography, the preparations currently used are derivatives of the triiodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., osmolality, viscosity, low chemotoxicity and high hydrophilicity, improve the tolerance of the substance by the patient's body. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present the number and severity of anaphylactic reactions that occurred in patients of all age groups undergoing diagnostic imaging with intravenous contrast administration: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether the different physicochemical properties of the agents had an impact on the incidence of acute complications.
Adverse reactions were classified according to the severity of the patient's condition and the diagnostic method used in a given patient. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with a contrast agent. In the analysis period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with contrast medium were performed. The total number of acute complications was 21, of which 17 followed iodine-based contrast agents and 5 followed gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Contrast agents administered to patients today are considered among the best-tolerated preparations used in medicine. However, like any drug, they can cause adverse reactions resulting from their toxic effects, and the growing number of imaging tests performed with contrast agents directly increases the number of adverse events associated with their administration. Although the risk of anaphylaxis is low, it should not be marginalized: the mass performance of radiological procedures with contrast agents makes familiarity with the rules of conduct in the event of hypersensitivity symptoms essential.
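As a quick sanity check on the counts reported above, the per-examination reaction rates can be computed directly (a simple worked calculation using the counts given in the abstract):

```python
# Acute reaction rates implied by the reported counts (2022, Gdańsk).
ct_exams, mri_exams = 34_053, 15_279
ct_reactions, mri_reactions = 17, 5

ct_rate = ct_reactions / ct_exams * 100      # percent per CT examination
mri_rate = mri_reactions / mri_exams * 100   # percent per MRI examination
overall = (ct_reactions + mri_reactions) / (ct_exams + mri_exams) * 100

print(f"CT: {ct_rate:.3f}%  MRI: {mri_rate:.3f}%  overall: {overall:.3f}%")
```

Both per-modality rates come out well below the 0.5% ceiling quoted in the abstract.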

Keywords: anaphylaxis, contrast medium, diagnostics, medical imaging

Procedia PDF Downloads 66
877 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperatures affect mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort, expressed by the Universal Thermal Climate Index (UTCI), is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is located. Urban characteristics such as high building density and scarce green areas amplify the increase in air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect raises air temperatures relative to the surrounding areas, particularly during summer heat wave periods. In this context, the researchers performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled with a distributed lag non-linear model (DLNM) comprising a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for both the exposure-response and lagged-response dimensions. The analysis was first applied to the present climate. Extrapolating the exposure-response associations beyond the observed data then allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future air temperature projections from five numerical experiments with regional climate models included in the EURO-CORDEX initiative, under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. For RCP 8.5, the results show an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future compared with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified by age, the ensemble-averaged increase in the heat-attributable mortality fraction for elderly people (>75 years) is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). Part of this work by one of the authors has received funding from the European Union's Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
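The attributable-fraction arithmetic behind figures like the 6.1% above can be illustrated with a deliberately simplified, one-dimensional sketch. The daily data, the minimum-mortality temperature and the log-linear risk slope below are all invented; the actual study fits a full DLNM with a lagged cross-basis rather than this single exposure-response curve.

```python
import numpy as np

rng = np.random.default_rng(0)
temp = rng.normal(22, 8, 3650)    # synthetic daily mean temperature, deg C
deaths = rng.poisson(30, 3650)    # synthetic daily cardiovascular deaths

mmt = 21.0    # assumed minimum-mortality temperature
beta = 0.03   # assumed log-relative-risk slope per degree above the MMT

# Relative risk above the MMT; RR = 1 at or below it (heat side only)
rr = np.where(temp > mmt, np.exp(beta * (temp - mmt)), 1.0)

# Attributable fraction: share of deaths that would not have occurred at the MMT
attr_deaths = ((rr - 1.0) / rr) * deaths
heat_af = attr_deaths.sum() / deaths.sum() * 100
print(f"heat-attributable fraction: {heat_af:.1f}%")
```

The same accounting, applied to RR curves projected under RCP scenarios, yields the future-vs-present differences reported in the abstract.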

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 132
876 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics

Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer

Abstract:

Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but its signals are very weak, making it unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulating the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, a cost-effective approach to creating nanostructured surfaces remains a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement, enabling each single unit to function as an isolated sensor.
Each of the formed structures, while arising from a different HEHD morphology, can be effectively tuned and tailored to provide high SERS enhancement. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates on the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.

Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of-care devices, SERS

Procedia PDF Downloads 348
875 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus Globulus

Authors: Ngoc Nguyen

Abstract:

Large plantations of Eucalyptus globulus have been established to produce pulpwood. This resource is not suitable for the production of decorative products, principally due to low wood grades and a "dull" appearance, but many trials have already been undertaken for the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impact of the inferior wood quality of young plantation trees on product recovery and value, and regarding optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation that reduces the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would offer great potential for the production of high-value wood products such as furniture, joinery, flooring and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also known as "reconstructed veneer". An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance product, its design and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on poplar veneer from plantations whose wood is characterized by low density, even colour, few defects and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour and low permeability.
Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used in the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are pre-treated. Both soaking (dipping) and vacuum-pressure methods are used in the study to compare the results and select the most efficient method for veneer dyeing. To date, colour measurements in the CIELAB colour system have shown significant differences in the colour of undyed veneers produced from the heartwood. According to these measurements, the colour became moderately darker with increasing sodium chloride concentration, compared to control samples. It is too early to identify the most suitable dye solution, as variables such as dye concentration, dyeing temperature and dyeing time have not yet been tested. The dye will be used with and without a UV absorbent once all trials have been completed using the optimal parameters for colouring veneers.
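Colour differences of the kind measured here are commonly summarised as a single Delta E*ab value in CIELAB space. A minimal sketch with invented L*a*b* values (not measurements from the study):

```python
import math

def delta_e_ab(lab1, lab2):
    """Euclidean (CIE76) colour difference between two L*a*b* triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

control = (62.0, 8.5, 21.0)   # hypothetical lighter, undyed heartwood veneer
dyed = (48.0, 10.0, 18.0)     # hypothetical moderately darker dyed veneer

de = delta_e_ab(control, dyed)
print(f"Delta E*ab = {de:.2f}")  # the lower L* dominates: the sample is darker
```

A lower L* with similar a* and b* is exactly the "moderately darker" shift the abstract describes.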

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 353
874 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure

Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati

Abstract:

The effective shear and bulk viscosities, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. These rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during liquid phase sintering, including the stress-strain relations, the sintering and hydrostatic stresses, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, accounting simultaneously for the gravity field and frictional forces. After analysis of the raw materials, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during liquid phase sintering is implemented in a CREEP user subroutine in ABAQUS. The numerical-experimental procedure reveals the anisotropic behavior, the clearly different spatial displacements along the three directions, and the incompressibility of the ceramic samples during sintering. An anisotropic shrinkage factor is proposed to quantify the shrinkage anisotropy. It is shown that the shrinkage along the normal axis of the cast sample is about 1.5 times larger than that along the casting direction, and that the gravitational force acting in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint-distorted sample, respectively.
In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in the von Mises, pressure, and principal stresses intensifies the non-uniformity of the relative density in all samples except the free sintering one, where the symmetrical stress distribution around the center hinders pyroplastic deformation. The densification results confirmed that the effective bulk viscosity is well defined by the relative density values. The stress analysis confirmed that the sintering stress exceeds the hydrostatic stress from the start to the end of the sintering time, so that, from both the theoretical and experimental points of view, the sintering process proceeds to completion.
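A shrinkage anisotropy factor of the kind proposed above can be sketched from before/after sample dimensions. The dimensions below are invented to reproduce the roughly 1.5x ratio reported in the abstract; the paper's exact definition may differ.

```python
def linear_shrinkage(before, after):
    """Fractional linear shrinkage of one dimension after sintering."""
    return (before - after) / before

# Hypothetical green vs. sintered dimensions (mm) of a cast sample
s_normal = linear_shrinkage(20.0, 17.6)    # normal to the casting plane
s_casting = linear_shrinkage(100.0, 92.0)  # along the casting direction

anisotropy_factor = s_normal / s_casting   # 1.0 would mean isotropic shrinkage
print(f"shrinkage normal: {s_normal:.3f}, casting: {s_casting:.3f}, "
      f"anisotropy factor: {anisotropy_factor:.2f}")
```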

Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure

Procedia PDF Downloads 345
873 Blackcurrant-Associated Rhabdovirus: New Pathogen for Blackcurrants in the Baltic Sea Region

Authors: Gunta Resevica, Nikita Zrelovs, Ivars Silamikelis, Ieva Kalnciema, Helvijs Niedra, Gunārs Lācis, Toms Bartulsons, Inga Moročko-Bičevska, Arturs Stalažs, Kristīne Drevinska, Andris Zeltins, Ina Balke

Abstract:

Newly discovered viruses provide novel knowledge for basic phytovirus research, serve as tools for biotechnology and can be helpful in identifying epidemic outbreaks. Blackcurrant-associated rhabdovirus (BCaRV) was discovered in samples from Russia and France held in a USA germplasm collection. As it was reported in one accession originating from France, it is unclear whether the material was already infected when it entered the USA or became infected while in the collection there. BCaRV was therefore defined as a non-EU virus. According to the ICTV classification, BCaRV is the representative of the species Blackcurrant betanucleorhabdovirus in the genus Betanucleorhabdovirus (family Rhabdoviridae). Nevertheless, the impact of BCaRV on its host, its transmission mechanisms and its vectors are still unknown. In an RNA-seq data pool from a high-throughput sequencing (HTS) study of resistance genes in Ribes plants, we observed differences between the gene transcript heat maps of sample groups. Additional analysis of the whole data pool (a total of 393,660,492 pairs of 150 bp reads) with rnaSPAdes v3.13.1 yielded a 14,424-base contig with an average coverage of 684x, sharing 99.5% identity (EMBOSS Needle) with the previously reported first complete genome of BCaRV (MF543022.1). This finding proved the presence of BCaRV in the EU and indicated that it might be a relevant pathogen. In this study, leaf tissue from twelve asymptomatic blackcurrant cv. Mara Eglite plants (tested negative for blackcurrant reversion virus (BRV)) from Dobele, Latvia (56°36'31.9"N, 23°18'13.6"E) was collected and used for total RNA isolation with the RNeasy Plant Mini Kit with minor modifications, followed by plant rRNA removal with a RiboMinus Plant Kit for RNA-Seq. HTS libraries were prepared using the MGI Easy RNA Directional Library Prep Set for 16 reactions to obtain 150 bp paired-end reads. Libraries were pooled, circularized, cleaned and sequenced on a DNBSEQ-G400 using a PE150 flow cell.
Additionally, all samples were tested by RT-PCR, and the amplicons were directly sequenced by a Sanger-based method. The contig representing the genome of BCaRV isolate Mara Eglite was deposited at the European Nucleotide Archive under accession number OU015520. These findings constitute the second evidence of the presence of this particular virus in the EU, and further research on the prevalence of BCaRV in Ribes from other geographical areas should be performed. As there is no information on the impact of BCaRV on its host, this should be investigated, particularly given that mixed infections with BRV and nucleorhabdoviruses have been reported.

Keywords: BCaRV, Betanucleorhabdovirus, Ribes, RNA-seq

Procedia PDF Downloads 188
872 Hegemonic Salaryman Masculinity: Case Study of Transitional Male Gender Roles in Today's Japan

Authors: D. Norton

Abstract:

This qualitative study focuses on the lived experience and displacement of young white-collar masculinities in Japan. In recent years, the salaryman lifestyle has undergone significant disruption: increased competition for regular employment, a rise in non-regular structurings of labour across the public and private sectors, and shifting role expectations within the home. Despite this, related scholarship hints at a continued reinforcement of the traditional male gender role - the salaryman remains a key benchmark of Japanese masculine identity. For those in structural proximity to these more 'normative' performativities, interest lies in their engagement with such narratives - how they make sense of their masculinity in response to the stated changes. In light of the historical emphasis on labour and breadwinning logics, the notions of security or precarity generated as a result remain unclear. Similarly, concern extends to developments within the private sphere - whether young white-collar men construct ideas of singlehood and companionship according to traditional gender ideologies or more contemporary, flexible readings. The influence of these still-emergent status distinctions on the logics of the social group in question has yet to be explored in depth by gender scholars. This project therefore focuses on the salaryman archetype as hegemonic: its transformation amidst these changes and the socialising mechanisms that continue to legitimate unequal gender hierarchies. For data collection, a series of ethnographic interviews was held over a period of 12 months with university-educated, white-collar male employees from both Osaka and the Greater Tokyo Area. Findings suggest a modern salaryman ideal reflecting both continuities and shifts within white-collar employment. Whilst receptive to more contemporary workplace practices, the narratives of those interviewed remain imbued with logics supporting patterns of internal hegemony.
The regular/non-regular distinction emerged as the foremost variable for both material and discursive patterns of white-collar stratification, with variants of displacement for each social group. Despite the heightened valorisation of stable employment, regular workers articulated various concerns over a model of corporate masculinity seen as incompatible with recent socioeconomic developments. Likewise, non-regular employees face detachment owing to a still-inflexible perception of their working masculinity as marginalized amidst economic precarity. In seeking to negotiate their respective challenges, those interviewed engaged with various concurrent social changes in ways that would often accommodate, reinforce, or expand upon traditional role behaviours. Few of these narratives offered any notable transgression of the ideal, however, suggesting that within the spectre of white-collar employment in Japan, any substantive transformation of corporate masculinity in the near future remains dependent upon economic developments rather than on the agency of those involved.

Keywords: gender ideologies, hegemonic masculinity, Japan, white-collar employment

Procedia PDF Downloads 131
871 Photoemission Momentum Microscopy of Graphene on Ir (111)

Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense

Abstract:

Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative mode of ARPES based on full-field imaging of the whole Brillouin zone (BZ) and the simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in its simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarization) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E,kx,ky) of the full valence electronic structure in approximately 20 minutes. Band dispersion stacks were measured in the energy range of 14 eV to 23 eV in steps of 1 eV. The linearly dispersing graphene bands at all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e., the LDAD) are very sensitive to the interaction of the graphene bands with the substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation.
They appear as local distortions of the photoelectron current distribution and are induced by quantum mechanical interference between the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour, with a mirror plane along the vertical axis. Moreover, two of the six have an oval shape while the rest are more circular, clearly indicating an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines, strongly reminiscent of Kikuchi lines in diffraction, is visible at energies around 22 eV. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridors.
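A data stack of the form I(E,kx,ky) lends itself to simple array slicing: each constant-energy momentum map is one index operation. The sketch below uses synthetic data in place of a measured stack, with the 14-23 eV, 1 eV-step energy axis described above.

```python
import numpy as np

n_e, n_k = 10, 64                        # 10 energy slices, 64x64 momentum grid
energies = np.linspace(14.0, 23.0, n_e)  # energy axis, eV (1 eV steps)
stack = np.random.default_rng(1).random((n_e, n_k, n_k))  # synthetic I(E,kx,ky)

def constant_energy_slice(stack, energies, e_target):
    """Return (energy, I(kx, ky)) for the slice closest to e_target."""
    i = int(np.argmin(np.abs(energies - e_target)))
    return energies[i], stack[i]

e_used, kmap = constant_energy_slice(stack, energies, 18.0)
print(f"slice at E = {e_used:.1f} eV, map shape {kmap.shape}")
```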

Keywords: band structure, graphene, momentum microscopy, LDAD

Procedia PDF Downloads 349
870 The Impact of Trade on Stock Market Integration of Emerging Markets

Authors: Anna M. Pretorius

Abstract:

The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets. However, high growth opportunities forced investors to consider emerging markets as well. Examples include the rapid growth of the "Asian Tigers" during the 1980s, growth in Latin America during the 1990s, and the increased interest in emerging markets during the global financial crisis. As a result, portfolio flows to emerging markets have increased substantially: in 2002, 7% of all equity allocations from advanced economies went to emerging markets; by 2012 this had increased to 20%. The stronger links between advanced and emerging markets have led to increased synchronization of asset price movements, and this increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing integration of emerging markets, this paper focuses on the determinants of the stock market integration of emerging market countries. Various studies have linked the level of financial market integration to specific economic variables, including economic growth, local inflation, trade openness, local investment, the budget surplus/deficit, market capitalization, domestic bank credit, the domestic institutional and legal environment, and world interest rates. The aim of this study is to investigate empirically to what extent trade-related determinants have an impact on stock market integration. The panel data sample includes data for 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand and Turkey for the period 1998-2011.
The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model, whose factors are extracted from a large panel of global stock market returns. The trade-related explanatory variables are exports as a percentage of GDP, imports as a percentage of GDP, and total trade as a percentage of GDP. Other macroeconomic indicators, such as market capitalisation, the size of the budget deficit and the effectiveness of securities exchange regulation, are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration; the macroeconomic variables identified in the literature are thus much more significant in explaining the stock market integration of emerging markets than that of developed markets. The three trade variables are all statistically significant at the 5% level. The market capitalisation variable is also significant, while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency of better understanding the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (a financial indicator), trade (representing the real economy) is a significant determinant of the stock market integration of countries not yet classified as developed economies.
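The integration measure described above, the explanatory power of a multi-factor model, can be sketched as the R-squared of a factor regression. The factors and returns below are synthetic stand-ins for the global return panel, and the factor loadings are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
T, K = 700, 3                               # observations, global factors

factors = rng.normal(size=(T, K))           # stand-in for extracted global factors
noise = rng.normal(scale=2.0, size=T)       # idiosyncratic (local) component
returns = factors @ np.array([0.8, 0.3, -0.5]) + noise  # one market's returns

X = np.column_stack([np.ones(T), factors])  # add an intercept
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
resid = returns - X @ beta
r2 = 1.0 - resid.var() / returns.var()      # integration proxy in [0, 1]
print(f"integration measure (R^2): {r2:.2f}")
```

A market driven mostly by global factors scores near 1; a segmented market scores near 0.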

Keywords: emerging markets, financial market integration, panel data, trade

Procedia PDF Downloads 308
869 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies

Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe

Abstract:

The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide range of knowledge-driven domains such as science, education and policy making. Nowadays, we are fed daily by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hypertextual environment where websites emerge and expand every day. But there are structures inside this knowledge: a given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings call out to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads of the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017.
We build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles. Our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext. There we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
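The inter-temporal matching step behind a phylomemy can be sketched with a set-similarity measure: knowledge domains (sets of terms) from consecutive periods are linked into lineages when their similarity exceeds a threshold. The Jaccard index, the example terms and the threshold below are illustrative choices, not necessarily those used in Gargantext.

```python
def jaccard(a, b):
    """Jaccard similarity between two term sets."""
    return len(a & b) / len(a | b)

# Invented knowledge domains extracted from two consecutive periods
period_1 = [{"primaire", "candidat", "débat"}, {"emploi", "chômage"}]
period_2 = [{"candidat", "débat", "programme"}, {"emploi", "retraite"}]

threshold = 0.3  # assumed minimum similarity for a conceptual lineage
lineages = [
    (i, j, round(jaccard(d1, d2), 2))
    for i, d1 in enumerate(period_1)
    for j, d2 in enumerate(period_2)
    if jaccard(d1, d2) >= threshold
]
print(lineages)  # matched domain pairs across the two periods
```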

Keywords: online political debate, French election, hyper-text, phylomemy

Procedia PDF Downloads 190
868 Remote Sensing Applications in Identifying Opium Poppy: A Dual Approach to Food Security and Counter-Terrorism

Authors: Hadi Fadaei

Abstract:

The opium poppy plant, known for its significant role in the global drug trade, poses a dual threat to food security and national security. This paper explores the application of remote sensing technology to identify the spectral reflectance characteristics of the opium poppy, aiming to enhance monitoring efforts and inform policy decisions. The increasing prevalence of opium poppy cultivation, particularly in regions where food security is already compromised, necessitates a comprehensive understanding of its spatial distribution and growth patterns. Remote sensing offers a non-invasive and efficient means of collecting data on agricultural practices, enabling the identification of crop types and their health status. By analyzing the spectral reflectance of the opium poppy plant, we can differentiate it from other crops, thereby providing critical insights into its cultivation areas. This capability is essential for developing targeted interventions to mitigate the impacts of illicit opium production on food security and local economies. The methodology involves the use of advanced remote sensing techniques, including satellite imagery and aerial photography, to capture high-resolution spectral data. These data will be processed using sophisticated algorithms to extract relevant features that characterize the opium poppy's reflectance. The analysis will focus on identifying specific spectral signatures associated with the plant at various growth stages, which can be correlated with its physiological characteristics. The findings of this research are expected to contribute significantly to the understanding of opium poppy cultivation dynamics. By establishing a reliable method for detecting and mapping opium poppy fields, policymakers and law enforcement agencies can enhance their efforts to combat illegal drug production. 
Furthermore, this research aims to highlight the implications of opium poppy cultivation on food security, particularly in regions where agricultural resources are limited and communities are vulnerable. In conclusion, the integration of remote sensing technology into the monitoring of opium poppy cultivation presents a promising approach to addressing the challenges posed by this plant. By identifying its spectral reflectance characteristics, we can develop effective strategies to mitigate its impact on food security and support counter-terrorism initiatives. This research not only aims to advance the field of remote sensing but also seeks to contribute to broader discussions on agricultural sustainability and security in the face of evolving threats. The outcomes of this study will provide valuable insights for stakeholders involved in food security, law enforcement, and agricultural policy, ultimately fostering a more secure and resilient future.
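The idea of separating crops by spectral signature can be sketched as follows (the band reflectance values, class names, and nearest-signature rule are invented for illustration; a real pipeline would use calibrated multispectral imagery and a trained classifier):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

# Hypothetical mean band reflectances for two crop classes at one growth stage.
signatures = {
    "poppy": {"red": 0.08, "nir": 0.45},
    "wheat": {"red": 0.05, "nir": 0.55},
}

def classify(pixel, signatures):
    """Assign the class whose reference signature is nearest in (red, nir) space."""
    def dist(sig):
        return (pixel["red"] - sig["red"]) ** 2 + (pixel["nir"] - sig["nir"]) ** 2
    return min(signatures, key=lambda k: dist(signatures[k]))

# A pixel whose reflectance sits close to the hypothetical poppy signature.
label = classify({"red": 0.09, "nir": 0.44}, signatures)
```

A pixel is simply assigned to the closest reference signature; in practice signatures are collected per growth stage, exactly because the abstract notes that the plant's reflectance changes as it matures.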

Keywords: opium poppy, remote sensing, spectral reflectance, food security, counter-terrorism

Procedia PDF Downloads 9
867 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behavior and tunable properties. Conventional auxetic lattice structures, in which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame, and the stretching-dominated deformation mechanism was determined according to Maxwell's stability criterion. The finite element (FE) models of 2D lattice structures, established with stainless steel material, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson's ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson's ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometric configuration using an orthogonal splicing method. The numerical results of the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. 
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
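Maxwell's stability criterion mentioned above can be checked for a 2D pin-jointed frame with the count M = b - 2j + 3, where b is the number of struts and j the number of joints; M >= 0 suggests a stretching-dominated lattice, M < 0 a bending-dominated one. A minimal sketch (the strut and joint counts are illustrative, not the paper's actual unit cell):

```python
def maxwell_2d(b, j):
    """Maxwell number for a 2D pin-jointed frame: M = b - 2j + 3.

    M >= 0 suggests a stretching-dominated (rigid) lattice,
    M < 0 a bending-dominated (under-constrained) one."""
    return b - 2 * j + 3

# Illustrative counts: a fully triangulated (braced) cell vs. a plain hexagon.
triangulated = maxwell_2d(b=11, j=7)   # braced unit cell
hexagon = maxwell_2d(b=6, j=6)         # regular honeycomb cell
```

With these counts the triangulated frame is just rigid (M = 0), while the plain hexagon is under-constrained (M = -3), which is why conventional honeycombs deform by strut bending.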

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 93
866 The Effect of Aerobics and Yogic Exercise on Selected Physiological and Psychological Variables of Middle-Aged Women

Authors: A. Pallavi, N. Vijay Mohan

Abstract:

A nation can be economically progressive only when its citizens have sufficient capacity to work efficiently and increase productivity. Good health must therefore be regarded as a primary need of the community. It supports the growth and development of body and mind, which in turn leads to the progress and prosperity of the nation. Optimum growth is a necessity for efficient existence in a biologically adverse and economically competitive world. It is also necessary for the execution of daily routine work. Yoga is a method, or a system, for the complete development of the personality of a human being. It can be further elaborated as an all-around and complete development of the body, mind, morality, intellect and soul of a being. Sri Aurobindo defines yoga as 'a methodical effort towards self-perfection by the development of the potentialities in the individual.' Aerobic exercise is defined as any activity that uses large muscle groups, can be maintained continuously, and is rhythmic in nature. It is a type of exercise that overloads the heart and lungs and causes them to work harder than at rest. The important idea behind aerobic exercise today is to get up and get moving. There are more activities than ever to choose from, whether a new activity or an old one: find something you enjoy doing that keeps your heart rate elevated for a continuous period and get moving to a healthier life. Middle-aged men were selected and served as the subjects for the purpose of this study. The selected subjects were in the age group of 30 to 40 years. By reviewing the literature and consulting experts in yoga and aerobic training, the investigator chose the variables specifically related to middle-aged men. The selected physiological variables are pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat and vital capacity. The selected psychological variables are job anxiety and occupational stress. 
The study was formulated as a random group design consisting of aerobic exercise and yogic exercise groups. The subjects (N=60) were randomly divided into three equal groups of twenty middle-aged men each. The groups were named as follows: 1. Experimental group I - aerobic exercise group; 2. Experimental group II - yogic exercise group; 3. Control group. All the groups were subjected to a pre-test prior to the experimental treatment. The experimental groups participated in their respective training programmes for twenty-four weeks, six days a week, throughout the study. The tests were administered prior to training (pre-test), after the twelfth week (second test) and after the twenty-fourth week (post-test) of the training schedule.

Keywords: pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat, vital capacity, psychological variables, job anxiety, occupational stress, aerobic exercise, yogic exercise

Procedia PDF Downloads 448
865 A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection

Authors: Alena Semeradtova, Marcel Stofik, Lucie Mareckova, Petr Maly, Ondrej Stanek, Jan Maly

Abstract:

Recently, novel strategies based on so-called molecular evolution were shown to be effective for the production of various peptide ligand libraries with affinities to molecular targets of interest comparable to, or even better than, those of monoclonal antibodies. The major advantage of these peptide scaffolds is mainly their low molecular weight and simple structure. This study describes a new immunosensor based on high-affinity binding molecules, using a simple optical system for the detection of human serum albumin (HSA) as a model molecule. We present a comparison of two variants of recombinant binders based on the albumin binding domain of protein G (ABD), performed on a micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterned glass chips were prepared using UV-photolithography on chromium-sputtered glass. The glass surface was modified with (3-aminopropyl)triethoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect the target molecule. The first variant is based on the ABD domain fused with the TolA chain; this molecule is biotinylated in vivo, and each molecule contains one biotin and one ABD domain. The second variant is based on a streptavidin molecule and contains four binding sites for biotin and four ABD domains. These high-affinity molecules were immobilized on the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding, 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) was used in every step. For both variants, the range of measured concentrations of fluorescently labelled HSA was 0-30 µg/ml. As a control, we performed a simultaneous assay without high-affinity binding molecules. The fluorescent signal was measured using an Olympus IX 70 inverted fluorescence microscope with a CoolLED pE-4000 as the light source, related filters, and a Retiga 2000R camera as the detector. 
The fluorescent signal from non-modified areas was subtracted from the signal of the fluorescent areas. Results were presented in graphs showing the dependence of the measured grayscale value on the log-scale HSA concentration. For the TolA variant, the limit of detection (LOD) of the optical immunosensor proposed in this study was calculated to be 0.20 µg/ml for HSA detection in 1% BSA and 0.24 µg/ml in 6% FBS. In the case of the streptavidin-based molecule, it was 0.04 µg/ml and 0.07 µg/ml, respectively. The dynamic range of the immunosensor could be estimated only for the TolA variant and was calculated to be 0.49-3.75 µg/ml and 0.73-1.88 µg/ml, respectively. In the case of the streptavidin-based variant, we did not reach surface saturation even at a concentration of 480 µg/ml, so the upper limit of the dynamic range was not estimated; the lower limit was calculated to be 0.14 µg/ml and 0.17 µg/ml, respectively. Based on the obtained results, it is clear that both variants are useful for creating the bio-recognition layer of immunosensors. For this particular system, the variant based on the streptavidin molecule is more useful for biosensing on planar glass surfaces. Immunosensors based on this variant would exhibit a better limit of detection and a wider dynamic range.
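For context, limits of detection of the kind quoted above are commonly derived from a calibration curve via the 3-sigma rule; a generic sketch (the blank readings and the calibration slope below are invented numbers, not the study's data):

```python
from statistics import pstdev

def limit_of_detection(blank_signals, slope):
    """LOD as 3 x the standard deviation of blank measurements divided by the
    slope of the calibration curve (signal per µg/ml)."""
    return 3 * pstdev(blank_signals) / slope

# Hypothetical grayscale readings of blank (analyte-free) spots
# and a hypothetical slope of the fitted calibration curve.
blanks = [10.0, 10.4, 9.8, 10.2, 9.6]
lod = limit_of_detection(blanks, slope=4.0)
```

The lower the blank noise and the steeper the calibration slope, the lower the LOD, which is consistent with the streptavidin variant (more ABD domains per immobilized molecule, hence a steeper response) outperforming the TolA variant.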

Keywords: high affinity binding molecules, human serum albumin, optical immunosensor, protein G, UV-photolitography

Procedia PDF Downloads 370
864 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes

Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen

Abstract:

Research on ecological water resources management addresses far from trivial questions and seems today to be one branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Much of the existing literature on the different facets of these studies is technical and targeted at specific users. This study offers a combination of all these aspects in an evaluation methodology for aquaculture lakes, with a paradigm that refers to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. The process of developing a conceptual framework therefore represents a more integrated and applicable concept derived from grounded theory. A design of an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes that can be used to describe the status of aquaculture lakes using indicators from an aquaculture water quality index (AWQI), an aesthetic aquaculture lake index (AALI) and a rapid appraisal for fisheries index (RAPFISH). The preliminary preparation was accomplished as follows: first, the study area was characterized at different spatial scales. Second, an inventory of core data resources was compiled, such as the city master plan, water quality reports from the environmental agency, and related government regulations. Third, a ground-checking survey was completed to validate the on-site condition of the study area. To design the integrated evaluation methodology for aquaculture lakes, we finally integrated and developed a rating score system called the Integrated Aquaculture Lake Index (IALI). The IALI reflects a compromise among all aspects and responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach. 
The IALI was elaborated as a decision aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake environment. The conclusion was that, while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, one must realize that no evaluation methodology for aquaculture lakes can succeed by assuming pristine conditions. The IALI developed in this work can be used as an effective, low-cost evaluation methodology of aquaculture lakes for developing countries, because it emphasizes simplicity and understandability: it must communicate to decision makers and experts alike. Moreover, stakeholders need to be helped to perceive their lakes so that sites can be accepted and valued by local people. For the development of a lake site, accessibility and the planning designation of the site are of decisive importance: local people want to know whether the lake's condition is safe and whether it can be used.
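The rating-score integration described above can be sketched as a weighted composite of the three sub-indices (the scores and weights below are invented for illustration; the study derives its own scoring system):

```python
def composite_index(scores, weights):
    """Weighted aggregation of normalized sub-indices into a single 0-100 rating."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in scores)

# Hypothetical normalized sub-index scores (0-100) for one lake
# and hypothetical importance weights for the three aspects.
scores = {"AWQI": 70.0, "AALI": 55.0, "RAPFISH": 80.0}
weights = {"AWQI": 0.5, "AALI": 0.2, "RAPFISH": 0.3}
iali = composite_index(scores, weights)
```

In the study's framing, weights of this kind could come from an AHP pairwise-comparison step (AHP appears among the keywords) rather than being fixed by hand.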

Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH

Procedia PDF Downloads 246
863 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on a nondestructive testing method that analyzes the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials and lubricants, based on data about wear processes, in order to increase the service life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the toothed gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken and the state of the friction surfaces is evaluated according to the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, the size ratio and the number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. 
Such morphological analysis of wear particles has been introduced into the routine of condition monitoring and diagnostics of various aircraft engines, including gas turbine engines, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear: the beginning of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and, subsequently, of flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the drive box of the engine over its operating time, a study of wear kinetics was carried out. From the results of this study, comparing the series of tribodiagnostic criteria, wear state ratings and the statistics of the morphological analysis, norms for the normal operating regime were developed. The study made it possible to develop wear state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased wear mode and, accordingly, preventing engine failures in flight.
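The size- and shape-based triage of fatigue particles described above can be sketched as a simple rule (the size bands follow the figures quoted in the abstract; the function and its labels are otherwise illustrative, not the authors' actual classifier):

```python
def classify_wear_particle(size_um, spherical):
    """Rough triage of a wear particle by size (micrometres) and shape.

    Spherical particles up to ~10 um mark the onset of surface fatigue;
    flocculent particles of 20-200 um mark its developed stage."""
    if spherical and size_um <= 10:
        return "fatigue onset (spherical)"
    if 20 <= size_um <= 200:
        return "developed fatigue (flocculent)"
    return "normal rubbing wear"

labels = [classify_wear_particle(5, True), classify_wear_particle(80, False)]
```

Automated counting of particles in each class over successive oil samples is what turns such a rule into the wear-kinetics trend the abstract describes.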

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 268
862 “Laws Drifting Off While Artificial Intelligence Is Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a foundational medium for digital business, according to a report by Gartner. The last 10 years represent an advance period in AI's development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic tabular data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. 
Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, abstract, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or plural nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 99
861 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue worldwide because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To select the best geostatistical interpolation technique, the results of cross-validation were used. The co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and 'average standard error' and 'root mean square error' with similar values. The maps resulting from the analysis were compared by raster subtraction, producing 3 maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1°C between 1961-1990 and 1971-2000 and 0.6°C between 1961-1990 and 1981-2010. Annual precipitation instead shows an opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. 
In fact, for temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08°C (77.04 km² of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83°C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with a stronger increase, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (the first 1961/1990-1971/2000 and the second 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62) and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
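The raster-subtraction and distribution-descriptor step can be sketched with NumPy (the toy grids below stand in for interpolated temperature rasters of two climate normals; the values are illustrative, not the study's data):

```python
import numpy as np

# Toy 2x3 grids standing in for interpolated temperature rasters
# of two climatological standard normals.
normal_1961_1990 = np.array([[14.0, 14.2, 14.1],
                             [13.9, 14.0, 14.3]])
normal_1981_2010 = np.array([[14.5, 14.8, 14.8],
                             [14.4, 14.6, 15.0]])

# Cell-by-cell difference map, as produced by subtracting rasters in a GIS.
diff = (normal_1981_2010 - normal_1961_1990).ravel()

mean_change = diff.mean()                       # average temperature change
dev = diff - mean_change
skewness = (dev ** 3).mean() / dev.std() ** 3   # shape descriptor of the change distribution
```

Kurtosis is computed analogously from the fourth moment; summarizing the difference raster by the area occupied by each change value is what yields the frequency/area statistics reported above.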

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 132
860 To Examine Perceptions and Associations of Shock Food Labelling and to Assess the Impact on Consumer Behaviour: A Quasi-Experimental Approach

Authors: Amy Heaps, Amy Burns, Una McMahon-Beattie

Abstract:

Shock and fear tactics have been used to encourage consumer behaviour change within the UK regarding lifestyle choices such as smoking and alcohol abuse, yet such measures have not been applied to food labels to encourage healthier purchasing decisions. Obesity levels are continuing to rise within the UK despite efforts made by government and charitable bodies to encourage consumer behavioural changes that would positively influence fat, salt, and sugar intake. We know that taking extreme measures to shock consumers into behavioural change has worked previously; for example, the anti-smoking television adverts and the new standardised cigarette and tobacco packaging have reduced the number of UK adults who smoke or encouraged those currently trying to quit. The USA has also introduced new front-of-pack labelling, which is clear, easy to read, and includes concise health warnings on products high in fat, salt, or sugar. This model has been successful, with consumers reducing purchases of products carrying these warning labels. Therefore, investigating whether shock labels would have an impact on UK consumer behaviour and purchasing decisions helps to fill a gap in this research field. This study aims to develop an understanding of consumers' initial responses to shock advertising, with an interest in the perceived long-term effect of shock advertising on consumer food purchasing decisions, behaviour, and attitudes, and achieves this through a mixed methodological approach with a sample of 25 participants ranging in age from 22 to 60. Within this research, mock shock labels were developed, including a graphic image, a health warning, and get-help information. These labels were made for products (available within the UK) with large market shares that were high in either fat, salt, or sugar. 
The results of online focus groups and mouse-tracking experiments helped to develop an understanding of consumers' initial responses to shock advertising, with an interest in the perceived long-term effect of shock advertising on consumer food purchasing decisions, behaviour, and attitudes. Preliminary results have shown that consumers believe the use of graphic images combined with a health warning would encourage consumer behaviour change and influence their purchasing decisions regarding products high in fat, salt and sugar. Preliminary main findings show that graphic mock shock labels may have an impact on consumer behaviour and purchasing decisions, which will, in turn, encourage healthier lifestyles. Focus group results show that 72% of participants indicated that these shock labels would have an impact on their purchasing decisions. During the mouse-tracking trials, this increased to 80% of participants, suggesting that more exposure to shock labels may have a bigger impact on potential consumer behaviour and purchasing decision change. In conclusion, preliminary results indicate that graphic shock labels will impact consumer purchasing decisions. The findings allow for a deeper understanding of initial emotional responses to these graphic labels. However, more research is needed to test the longevity of these labels' effect on consumer purchasing decisions; this research exercise is demonstrably the foundation for future detailed work.

Keywords: consumer behavior, decision making, labelling legislation, purchasing decisions, shock advertising, shock labelling

Procedia PDF Downloads 71
859 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include necessary measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluation (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike the traditional assessment, in this paper, we present a deep learning based facial recognition algorithm for accurate, comprehensive and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments based on tracking muscle groups over time using a designed landmark detection method. The system also includes the function of grading and correcting the clients' form during exercise. Experienced coaches and personal trainers can tell a client's limits based on facial expression and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients. It uses landmark detection to capture subtle changes in muscle group movements. 
It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, the distal mobility of the glenohumeral joint, and their effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's historical data can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance.
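One common way detected landmarks are turned into a mobility measure is via joint angles; a minimal sketch (the landmark coordinates and the hip-shoulder-elbow example are illustrative, not the authors' actual pipeline):

```python
import math

def joint_angle(a, b, c):
    """Angle at landmark b (in degrees) formed by landmarks a-b-c,
    e.g. hip-shoulder-elbow for shoulder mobility."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Three toy landmark positions in image coordinates: the angle at the
# middle landmark between the two outer ones.
angle = joint_angle((0, 1), (0, 0), (1, 0))
```

Tracking how such angles change across sessions is one plausible way to quantify the proximal/distal mobility measures listed above.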

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 136
858 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period

Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer

Abstract:

Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes in pregnancy cause an increased load on a normal lymphatic system, which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, the hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon, with prevalence data unavailable. Inclusion criteria were being aged over 18 years, a diagnosis of primary or secondary lymphoedema of the arm or leg, pregnancy within the preceding ten years (since 2012), and having received pregnancy and postnatal care in Australia. Exclusion criteria were a diagnosis of lipedema and inability to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases.
This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups, which provided the transcript data for inductive thematic analysis to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were significantly impacted, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying their pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not occur despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, and their supporters, including health care providers.

Keywords: lymphoedema, management strategies, pregnancy, qualitative

Procedia PDF Downloads 91
857 Co-Culture with Murine Stromal Cells Enhances the In-vitro Expansion of Hematopoietic Stem Cells in Response to Low Concentrations of Trans-Resveratrol

Authors: Mariyah Poonawala, Selvan Ravindran, Anuradha Vaidya

Abstract:

Despite much progress in understanding the regulatory factors and cytokines that support the maturation of the various cell lineages of the hematopoietic system, the factors that govern the self-renewal and proliferation of hematopoietic stem cells (HSCs) are still a grey area of research. Hematopoietic stem cell transplantation (HSCT) has evolved over the years and gained tremendous importance in the treatment of both malignant and non-malignant diseases. However, factors such as graft rejection and multiple organ failure have challenged HSCT from time to time, underscoring the urgent need for the development of milder processes for successful hematopoietic transplantation. An emerging concept in the field of stem cell biology states that the interactions between the bone-marrow micro-environment and the hematopoietic stem and progenitor cells are essential for the regulation, maintenance, commitment, and proliferation of stem cells. Understanding the role of mesenchymal stromal cells in modulating the functionality of HSCs is, therefore, an important area of research. Trans-resveratrol has been extensively studied for its ability to combat and prevent cancer, diabetes, cardiovascular diseases, and other conditions. The aim of the present study was to understand the effect of trans-resveratrol on HSCs using single- and co-culture systems. We used KG1a cells since they are a well-accepted hematopoietic stem cell model system. Our preliminary experiments showed that low concentrations of trans-resveratrol stimulated the HSCs to undergo proliferation, whereas high concentrations did not. We used a murine fibroblast cell line, M210B4, as a stromal feeder layer. On culturing the KG1a cells with M210B4 cells, we observed that the stimulatory and inhibitory effects of trans-resveratrol, at low and high concentrations respectively, were enhanced.
Our further experiments showed that low concentrations of trans-resveratrol reduced the generation of reactive oxygen species (ROS) and nitric oxide (NO), whereas high concentrations increased the oxidative stress in KG1a cells. We speculated that the oxidative stress was perhaps imposing inhibitory effects at high concentration, and the same was confirmed by performing an apoptotic assay. Furthermore, cell cycle analysis and growth kinetic experiments provided evidence that low concentrations of trans-resveratrol reduced the doubling time of the cells. Our hypothesis is that at low concentrations of trans-resveratrol the cells are pushed into the G0/G1 phase and re-enter the cell cycle, resulting in their proliferation, whereas at high concentrations the cells are perhaps arrested at the G2/M phase or at cytokinesis and therefore undergo apoptosis. Liquid Chromatography–Quadrupole Time-of-Flight Mass Spectrometry (LC-Q-TOF MS) analyses indicated the presence of trans-resveratrol and its metabolite(s) in the supernatant of the co-cultured cells incubated with a high concentration of trans-resveratrol. We conjecture that the metabolites of trans-resveratrol are perhaps responsible for the apoptosis observed at the high concentration. Our findings may shed light on the unsolved problems in the in vitro expansion of stem cells and may have implications for the ex vivo manipulation of HSCs for therapeutic purposes.

Keywords: co-culture system, hematopoietic micro-environment, KG1a cell line, M210B4 cell line, trans-resveratrol

Procedia PDF Downloads 261
856 Effect of Non-Thermal Plasma, Chitosan and Polymyxin B on Quorum Sensing Activity and Biofilm of Pseudomonas aeruginosa

Authors: Alena Cejkova, Martina Paldrychova, Jana Michailidu, Olga Matatkova, Jan Masak

Abstract:

The increasing resistance of pathogenic microorganisms to many antibiotics is a serious threat to the treatment of infectious diseases and the disinfection of medical instruments. It should be added that the resistance of microbial populations growing in biofilms is often up to 1000 times higher than that of planktonic cells. Biofilm formation in a number of microorganisms is largely influenced by the quorum sensing regulatory mechanism. Finding external factors, such as natural substances or physical processes, that can interfere effectively with quorum sensing signal molecules should reduce the ability of the cell population to form biofilm and increase the effectiveness of antibiotics. The present work is devoted to the effect of chitosan, as a representative of natural substances with anti-biofilm activity, and non-thermal plasma (NTP), alone or in combination with polymyxin B, on biofilm formation by Pseudomonas aeruginosa. Particular attention was paid to the influence of these agents on the level of quorum sensing signal molecules (acyl-homoserine lactones) during planktonic and biofilm cultivations. Opportunistic pathogenic strains of Pseudomonas aeruginosa (DBM 3081, DBM 3777, ATCC 10145, ATCC 15442) were used as model microorganisms. Cultivations of planktonic and biofilm populations in 96-well microtiter plates on a horizontal shaker were used to determine the antibiotic and anti-biofilm activity of chitosan and polymyxin B. Biofilm-growing cells on titanium alloy, which is used for the preparation of joint replacements, were exposed to non-thermal plasma generated by a cometary corona with a metallic grid for 15 and 30 minutes. Cultivation then continued in fresh LB medium with or without chitosan or polymyxin B for the next 24 h. Biofilms were quantified by the crystal violet assay.
The metabolic activity of the cells in the biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells. The activity of N-acyl homoserine lactones (AHLs), compounds involved in the regulation of biofilm formation, was determined using an Agrobacterium tumefaciens strain harboring a traG::lacZ/traR reporter gene responsive to AHLs. The experiments showed that both chitosan and non-thermal plasma reduce the AHLs level and thus biofilm formation and stability. The effectiveness of both agents was somewhat strain dependent. During the chitosan-induced (45 mg/l) eradication of P. aeruginosa DBM 3081 biofilm on titanium alloy, there was an 80% decrease in AHLs. Applying chitosan or NTP alone to the P. aeruginosa DBM 3777 biofilm did not cause a significant decrease in AHLs; combining both (chitosan 55 mg/l and NTP 30 min), however, resulted in a 70% decrease in AHLs. The combined application of NTP and polymyxin B allowed the antibiotic concentration to be reduced while achieving the same level of AHLs inhibition in P. aeruginosa ATCC 15442. The results show that non-thermal plasma and chitosan have considerable potential for the eradication of highly resistant P. aeruginosa biofilms, for example on medical instruments or joint implants.

Keywords: anti-biofilm activity, chitosan, non-thermal plasma, opportunistic pathogens

Procedia PDF Downloads 204
855 Community Engagement: Experience from the SIREN Study in Sub-Saharan Africa

Authors: Arti Singh, Carolyn Jenkins, Oyedunni S. Arulogun, Mayowa O. Owolabi, Fred S. Sarfo, Bruce Ovbiagele, Enzinne Sylvia

Abstract:

Background: Stroke, the leading cause of adult-onset disability and the second leading cause of death, is a major public health concern particularly pertinent in Sub-Saharan Africa (SSA), where nearly 80% of all global stroke mortalities occur. The Stroke Investigative Research and Education Network (SIREN) seeks to comprehensively characterize the genomic, sociocultural, economic, and behavioral risk factors for stroke and to build effective research teams to address and decrease the burden of stroke and other non-communicable diseases in SSA. One of the first steps towards this goal was to effectively engage the communities that suffer the high burden of disease in SSA. This study describes how the SIREN project engaged six sites in Ghana and Nigeria over the past three years, describing the community engagement activities that have arisen since inception. Aim: The aim of community engagement (CE) within SIREN is to elicit information about knowledge, attitudes, beliefs, and practices (KABP) regarding stroke and its risk factors from individuals of African ancestry in SSA, and to educate the community about stroke and ways to decrease disabilities and deaths from stroke using socioculturally appropriate messaging and messengers. Methods: Community Advisory Boards (CABs), Focus Group Discussions (FGDs), and community outreach programs. Results: 27 FGDs with 168 participants, including community heads, religious leaders, health professionals, and individuals with stroke, among others, were conducted, and over 60 CE outreaches have been conducted within the SIREN performance sites. Over 5,900 individuals have received education on cardiovascular risk factors, and about 5,000 have been screened for cardiovascular risk factors during the outreaches. FGDs and outreach programs indicate that knowledge of stroke and its risk factors is limited, and evidence-based follow-up care is often late.
Other findings include: 1) Most recognize hypertension as a major risk factor for stroke. 2) About 50% report that stroke is hereditary, and about 20% do not know which organs are affected by stroke. 3) More than 95% are willing to participate in genetic testing research, and about 85% are willing to pay for testing and recommend the test to others. 4) Almost all indicated that genetic testing could help health providers better treat stroke and help scientists better understand its causes. The CABs provided stakeholder input into SIREN activities and facilitated collaborations among investigators, community members, and stakeholders. Conclusion: The CE core within SIREN is a first-of-its-kind public outreach engagement initiative to evaluate and address perceptions about stroke and genomics by patients, caregivers, and local leaders in SSA, and it has implications as a model for assessment in other high-stroke-risk populations. SIREN’s CE program uses best practices to build capacity for community-engaged research, accelerate the integration of research findings into practice, and strengthen dynamic community-academic partnerships within our communities. CE has had several major successes over the past three years, including our multi-site collaboration examining the KABP about stroke (symptoms, risk factors, burden) and genetic testing across SSA.

Keywords: community advisory board, community engagement, focus groups, outreach, SSA, stroke

Procedia PDF Downloads 435
854 Investigating the Thermal Comfort Properties of Mohair Fabrics

Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman

Abstract:

Mohair, obtained from the Angora goat, is a luxury fiber recognized as one of the best-quality natural fibers. Expanding the use of mohair into technical and functional textile products necessitates a better understanding of how the use of mohair in fabrics affects their thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fiber composition and fabric structural parameters on conductive and convective heat transfer to obtain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices, and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m²) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics occurs in environmental conditions where there is wind flow or the object is moving (e.g., running or walking). The thermal comfort properties of mohair fibers were objectively evaluated, firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure, and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument. Clothing insulation (clo) was calculated from the above.
The thermal properties of fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. Regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples. This was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which make them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air to provide higher thermal insulation, but also prevent the free airflow that allows thermal convection.
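The clothing insulation (clo) values mentioned above follow from the standard definition 1 clo = 0.155 m²·K/W; a minimal sketch of the conversion (the example resistance reading is hypothetical, not a measured value from this study):

```python
CLO_PER_M2KW = 1 / 0.155  # standard definition: 1 clo = 0.155 m^2*K/W

def resistance_to_clo(r_ct):
    """Convert a measured thermal resistance (m^2*K/W) to clothing
    insulation in clo units."""
    return r_ct * CLO_PER_M2KW

# Hypothetical Alambeta-style reading of 0.031 m^2*K/W:
print(round(resistance_to_clo(0.031), 2))  # 0.2
```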

Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance

Procedia PDF Downloads 150
853 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different public-sector responses for decreasing traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic good, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport science, but until recently there was insufficient data for evaluating road traffic flow patterns at the scale of an entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular charging zone in their downtown, but this protects only the users and inhabitants of the CBD (Central Business District) area. By using Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the city’s congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data for the research were collected with the help of Google’s Distance Matrix API (Application Programming Interface), which provides estimated future traffic data as travel times between freely chosen coordinate pairs.
From the difference between free-flow and congested travel times, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas that lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that forces car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
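The hot-spot detection step described above can be sketched as the ratio of congested to free-flow travel time for one element of a Distance Matrix API response (the numbers below are hypothetical; the `duration_in_traffic` field is returned by the API when a `departure_time` is supplied in the request):

```python
def congestion_ratio(element):
    """Ratio of congested to free-flow travel time for one Distance
    Matrix result element; values well above 1.0 flag a congestion
    hot spot on that origin-destination pair.

    `element` follows the JSON shape of a Google Distance Matrix API
    result element, with durations given in seconds.
    """
    free_flow = element["duration"]["value"]
    congested = element["duration_in_traffic"]["value"]
    return congested / free_flow

# Hypothetical element for one coordinate pair on the main road:
element = {
    "status": "OK",
    "duration": {"value": 600},             # 10 min free flow
    "duration_in_traffic": {"value": 900},  # 15 min at peak time
}
print(congestion_ratio(element))  # 1.5
```

Repeating this for every measured coordinate pair over the day yields the congestion pattern the study describes.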

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 204
852 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. 
For example, ecological resilience value may, in some cases, be best understood as binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; and 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
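The logistic value function and Monte Carlo uncertainty analysis mentioned above can be sketched as follows; the threshold, steepness, and quantities are illustrative assumptions, not parameters from the EVF framework itself:

```python
import math
import random

def logistic_value(q, q_threshold, steepness=10.0):
    """Logistic value function: the value of ecosystem quantity q
    approaches 0 below the threshold and 1 above it, as for a binary
    ecological-resilience value."""
    return 1.0 / (1.0 + math.exp(-steepness * (q - q_threshold)))

def monte_carlo_value(q, threshold_mean, threshold_sd, n=10_000, seed=42):
    """Propagate uncertainty in the threshold parameter by Monte Carlo
    sampling and return the mean value."""
    rng = random.Random(seed)
    values = [logistic_value(q, rng.gauss(threshold_mean, threshold_sd))
              for _ in range(n)]
    return sum(values) / n

# Hypothetical example: value of an ecosystem quantity of 0.6 when the
# tipping point is uncertain, centred at 0.5 with sd 0.05.
print(round(monte_carlo_value(0.6, 0.5, 0.05), 2))
```

Evaluating such a function along a modeled ecosystem trajectory would produce the value time series discussed above.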

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 198
851 Poly(Methyl Methacrylate) Degradation Products and Its in vitro Cytotoxicity Evaluation in NIH3T3 Cells

Authors: Lesly Y Carmona-Sarabia, Luisa Barraza-Vergara, Vilmalí López-Mejías, Wandaliz Torres-García, Maribella Domenech-Garcia, Madeline Torres-Lugo

Abstract:

Biosensors are used in many applications, providing real-time monitoring to treat long-term conditions. Thus, understanding the physicochemical properties and biological side effects on the skin of polymers (e.g., poly(methyl methacrylate), PMMA) employed in the fabrication of wearable biosensors is crucial for the selection of manufacturing materials within this field. PMMA, a hydrophobic, thermoplastic polymer, is commonly employed as a coating material or substrate in the fabrication of wearable devices. Evaluating the cytotoxicity of PMMA (including residual monomers or degradation products) on skin cells and tissue is required to prevent possible adverse effects (cell death, skin reactions, sensitization) on human health. Within this work, accelerated aging of PMMA (Mw ~ 15000) through thermal and photochemical degradation was undertaken. The accelerated aging of PMMA was carried out by thermal (200°C, 1 h) and photochemical (UV-Vis, 8-15 d) degradation, following adapted ISO protocols (ISO 10993-12, ISO 4892-1:2016, ISO 877-1:2009, ISO 188:2011). In addition, in vitro cytotoxicity evaluation of the PMMA degradation products was performed using NIH3T3 fibroblast cells to assess the response of skin tissues (in terms of cell viability) exposed to polymers utilized to manufacture wearable biosensors, such as PMMA. The PMMA (Mw ~ 15000), before and after the accelerated aging experiments, was characterized by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD), and scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS) to verify the successful degradation of this polymer under the conditions previously mentioned. The degradation products were characterized by nuclear magnetic resonance (NMR) to identify possible byproducts generated by the accelerated aging.
Results demonstrated a weight loss of between 1.5 and 2.2% (TGA thermograms) for PMMA after accelerated aging. The EDS elemental analysis revealed a 1.32 wt.% loss of carbon for PMMA after thermal degradation. These results may be associated with the amount (%) of PMMA degraded during the accelerated aging experiments. Furthermore, NMR detected the monomer and methyl formate (at low concentrations) and a low-molecular-weight radical (·COOCH₃, at higher concentrations) among the thermal degradation products. In the photodegradation products, methyl formate was detected at higher concentrations. These results agree with the proposed thermal and photochemical degradation mechanisms found in the literature [1, 2]. Finally, significant cytotoxicity towards the NIH3T3 cells was observed for the thermal and photochemical degradation products: cell viability decreased by >90% (stock solutions). It is proposed that the presence of byproducts (e.g., methyl formate or radicals such as ·COOCH₃) from the PMMA degradation might be responsible for the cytotoxicity observed in the NIH3T3 fibroblast cells. Additionally, experiments using skin models will be employed for comparison with the NIH3T3 fibroblast cell model.

Keywords: biosensors, polymer, skin irritation, degradation products, cell viability

Procedia PDF Downloads 145
850 Recovery of Food Waste: Production of Dog Food

Authors: K. Nazan Turhan, Tuğçe Ersan

Abstract:

The population of the world is approximately 8 billion and increasing uncontrollably, leading to an increase in consumption. This situation causes serious problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption along the entire food supply chain, from primary production to the end household consumer level. In addition, FAO estimates that one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger. For instance, excessive amounts of food waste cause greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels due to the expected scarcity of resources for the growing world population. There are several ways to recover food waste. According to the United States Environmental Protection Agency’s Food Recovery Hierarchy, food waste recovery methods are, from most to least preferred: source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration. Bioethanol, biodiesel, biogas, agricultural fertilizer, and animal feed can be obtained from food waste generated by different food industries. In this project, feeding animals was selected as the food waste recovery method, and the food waste of a single plant was used to provide ingredient uniformity. Grasshoppers were used as a protein source. In other words, the project developed a dog food product by recovering the plant’s food waste. The collected food waste and purchased grasshoppers were sterilized, dried, and pulverized. They were then all mixed with 60 g of agar-agar solution (4% w/v).
Three different aromas were added separately to the samples to enhance flavor quality. Since nutritional requirements differ among dogs, fulfilling all nutritional needs is one of the challenges: there is a wide range of nutritional needs in terms of carbohydrates, protein, fat, sodium, calcium, and so on, and the requirements differ depending on age, gender, weight, height, and breed. Therefore, the developed product contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated in terms of contamination and nutritional content. For contamination risk, E. coli and Salmonella detection tests were performed, and the results were negative. For the nutritional value test, protein content analysis was done. The protein contents of the different samples varied between 26.07% and 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of the different samples ranged between 0.2456 and 0.4145.

Keywords: food waste, dog food, animal nutrition, food waste recovery

Procedia PDF Downloads 70
849 Study Habits and Level of Difficulty Encountered by Maltese Students Studying Biology Advanced Level Topics

Authors: Marthese Azzopardi, Liberato Camilleri

Abstract:

This research was performed to investigate the study habits and the level of difficulty perceived by post-secondary students in Advanced-level Biology topics after completing their first year of study. At the end of a two-year ‘sixth form’ course, Maltese students sit for the Matriculation and Secondary Education Certificate (MATSEC) Advanced-level Biology exam as a requirement to pursue science-related studies at the University of Malta. The sample was composed of 23 students (16 taking Chemistry and seven taking some other subject at the Advanced level). The cohort comprised seven males and 16 females. A questionnaire constructed by the authors was answered anonymously during the last lecture at the end of the first year of study, in May 2016. The chi-square test revealed that gender has no effect on the various study habits (χ²(6) = 5.873, p = 0.438). ‘Reading both notes and textbooks’ was the most common method adopted by males (71.4%), whereas ‘Writing notes on each topic’ was the method most used by females (81.3%). The Mann-Whitney U test showed no significant relationship between students’ study habits and the mean assessment mark obtained at the end of the first-year course (p = 0.231). A statistically significant difference was found with the one-way ANOVA test when comparing the mean assessment mark obtained at the end of the first-year course with students clustered by their Secondary Education Certificate (SEC) grade (p < 0.001). Those obtaining a SEC grade of 2 or 3 achieved the highest mean assessment marks of 68.33% and 66.9%, respectively [SEC grades range from 1 to 7, where 1 is the highest]. The Friedman test was used to compare the mean difficulty rating scores given for each topic. The mean difficulty rating score ranges from 1 to 4, where a larger mean rating score indicates higher difficulty. When considering the whole group of students, nine topics out of 21 were perceived as significantly more difficult than the others.
Protein Synthesis, DNA Replication, and Biomolecules were the most difficult, in that order. The Mann-Whitney U test revealed that the perceived level of difficulty in comprehending Biomolecules is significantly lower for students taking Chemistry than for those not taking the subject (p = 0.018). Protein Synthesis was rated as the most difficult by Chemistry students and Biomolecules by those not studying Chemistry. DNA Replication was perceived as the second most difficult topic by both groups. The Mann-Whitney U test was also used to examine the effect of gender on the perceived level of difficulty in comprehending the various topics. It was found that females have significantly more difficulty in comprehending Biomolecules than males (p = 0.039). Protein Synthesis was perceived as the most difficult topic by males (mean difficulty rating score = 3.14), while Biomolecules, DNA Replication, and Protein Synthesis were of equal difficulty for females (mean difficulty rating score = 3.00). Males and females perceived DNA Replication as equally difficult (mean difficulty rating score = 3.00). Discovering students’ study habits and the perceived level of difficulty of specific topics is vital for lecturers to offer guidance that leads to higher academic achievement.
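The non-parametric tests used in the study can be reproduced with SciPy; a minimal sketch on synthetic 1-4 difficulty ratings (the data below are randomly generated, not the study’s actual ratings):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 1-4 difficulty ratings from 23 students for three topics
# (repeated measures: the same students rate every topic).
biomolecules = rng.integers(1, 5, 23)
dna_replication = rng.integers(1, 5, 23)
protein_synthesis = rng.integers(1, 5, 23)

# Friedman test: do difficulty ratings differ across the related topics?
f_stat, f_p = stats.friedmanchisquare(biomolecules, dna_replication,
                                      protein_synthesis)

# Mann-Whitney U test: do two independent groups (e.g. the 16 Chemistry
# and 7 non-Chemistry students) rate one topic differently?
u_stat, u_p = stats.mannwhitneyu(biomolecules[:16], biomolecules[16:])

print(f"Friedman p = {f_p:.3f}, Mann-Whitney p = {u_p:.3f}")
```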

Keywords: biology, perceived difficulty, post-secondary, study habits

Procedia PDF Downloads 192