Search results for: nonlinear Shannon limit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2903

53 Evaluation of Forensic Pathology Practice Outside Germany – Experiences From 20 Years of Second Look Autopsies in Cooperation with the Institute of Legal Medicine Munich

Authors: Michael Josef Schwerer, Oliver Peschel

Abstract:

Background: The sense and purpose of forensic postmortem examinations are undoubtedly the same in Institutes of Legal Medicine all over the world. Cause and manner of death must be determined, persons responsible for unnatural death must be brought to justice, and accidents demand changes in the respective scenarios to avoid future mishaps. The latter particularly concerns aircraft accidents, not only regarding consequences from criminal or civil law but also in pursuance of the International Civil Aviation Organization’s regulations, which demand that lessons be learned from mishap investigations to improve flight safety. Irrespective of the distinct circumstances of a given casualty or the respective questions in subsequent death investigations, a forensic autopsy is the basis for all further casework, the clue to otherwise hidden solutions, and the crucial limiting factor for final success when not all possible findings have been properly collected. This also implies that the targeted work of police forces and expert witnesses strongly depends on the quality of forensic pathology practice. Deadly events in foreign countries, which lead to investigations not only abroad but also in Germany, can be challenging in this context. Frequently, second-look autopsies after the repatriation of the deceased to Germany are requested by the legal authorities to ensure proper and profound documentation of all relevant findings. Aims and Methods: To evaluate forensic postmortem practice abroad, a retrospective study of the findings from the corresponding second-look autopsies at the Institute of Legal Medicine Munich over the last 20 years was carried out. New findings unreported in the previous autopsy were recorded and judged for their relevance to solving the respective case. Further, the condition of the corpse at the time of the second autopsy was rated to discuss artifacts mimicking evidence or the possibility of findings lost to, e.g., decomposition. 
Recommendations for future handling of death cases abroad and efficient autopsy practice were pursued. Results and Discussion: Our re-evaluation confirmed a high quality of autopsy practice abroad in the vast majority of cases. However, in some casework, incomplete documentation of pathology findings was revealed, along with either insufficient or misconducted dissection of organs. Further, some of the bodies were missing parts of organs, most probably as a result of sampling for histology studies during the first postmortem. For the aeromedical evaluation of a decedent’s health status prior to an aviation mishap, lost or obscured findings in the heart, lungs, and brain in particular impeded expert testimony. Moreover, incomplete fixation of the body or body parts for repatriation was seen in several cases. This particularly involved previously dissected organs deposited back into the body cavities at the end of the first autopsy. Conclusions and Recommendations: Detailed preparation in the first forensic autopsy avoids the necessity of a second-look postmortem in the majority of cases. To limit decomposition changes during repatriation from abroad, special care must be taken to include pre-dissected organs in the chemical fixation process, particularly when they are separated from the blood vessels and simply deposited back into the body cavities.

Keywords: autopsy practice, second-look autopsy, retrospective study, quality standards, decomposition changes, repatriation

Procedia PDF Downloads 24
52 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often constrained to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area, or swath, of fully polarimetric images compared to that of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to fully polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature. 
Although the improvements achieved by recently investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost, or loss, function. The proposed method is experimentally validated on real data sets and compared with a well-known, standard approach from the literature. The experiments show that the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
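As an illustration of the kind of multi-term cost function described above, the sketch below combines a pixel-wise reconstruction error with a term that penalizes mismatch in the span (total backscattered power). The weights, the choice of terms, and all function names are hypothetical; the paper's actual architecture and loss are not reproduced here.

```python
import numpy as np

def span(channels):
    # Total backscattered power summed over polarimetric channels.
    return np.sum(np.abs(channels) ** 2, axis=0)

def composite_loss(pred, target, w_pix=1.0, w_span=0.1):
    """Weighted sum of a pixel-wise reconstruction term and a
    span-consistency term; pred and target have shape (channels, H, W)
    and hold complex scattering amplitudes."""
    pixel_term = np.mean(np.abs(pred - target) ** 2)
    span_term = np.mean((span(pred) - span(target)) ** 2)
    return w_pix * pixel_term + w_span * span_term

rng = np.random.default_rng(0)
full_pol = rng.normal(size=(3, 8, 8)) + 1j * rng.normal(size=(3, 8, 8))
loss_self = composite_loss(full_pol, full_pol)  # perfect reconstruction -> 0.0
```

Weighting several physically meaningful terms this way is one plausible reading of "controlling the training process with respect to characteristic features of polarimetric images".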

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 48
51 The Development of User Behavior in Urban Regeneration Areas by Utilizing the Floating Population Data

Authors: Jung-Hun Cho, Tae-Heon Moon, Sun-Young Heo

Abstract:

Many urban problems caused by urbanization and industrialization have occurred around the world. In particular, the creation of satellite towns, driven by the outward expansion of cities, has led to traffic problems and the hollowing-out of old towns, raising the necessity of urban regeneration in old towns along with the aging of existing urban infrastructure. To select urban regeneration priority regions for the strategic execution of urban regeneration in Korea, population size, the number of businesses, and the degree of deterioration were chosen as standards. These existing standards are limited in their ability to address urban problems fundamentally and to keep up with a rapidly changing reality. Therefore, it was necessary to add new indicators that can reflect the decline of the relevant cities and their conditions. In this regard, this study selected Busan Metropolitan City, Korea as the target area: a leading international port city where urban regeneration has been active, much like Yokohama, Japan. Prior to setting the urban regeneration priority regions, present conditions should be reflected, because uniform, uncharacterized projects have been implemented without any quantitative analysis of population behavior within the regions. For this reason, this study conducted a characterization analysis and type classification based on user behaviors, using floating population data, a representative form of big data that has become a prominent topic across society in recent years. While the 23 regions of the existing Busan Metropolitan City urban regeneration priority scheme were classified into three types, the type classification on the basis of user behaviors divided the same 23 regions into four types. 
The four types were as follows: Type I, young people, morning type; Type II, old and middle-aged, general type with sharp changes in floating population; Type III, old and middle-aged, 24-hour type; and Type IV, old and middle-aged with little floating population. Each of the four types showed distinct regional characteristics, and the results based on user behaviors differed from those of the existing urban regeneration priority regions. According to the results, in Type I young people were the majority around the existing old built-up area, where the floating population at dawn is four times that of other areas. In Type II, there were many old and middle-aged people around the existing built-up area and general neighborhoods, where the average floating population was higher than in other areas due to commuting, while in Type III there was no change in the floating population throughout the 24 hours, although old and middle-aged people predominated around the existing general neighborhoods. Type IV includes the existing economy-based, central built-up area, and general neighborhood types, where old and middle-aged people were the majority in a general commuting pattern with little floating population. Unlike the existing urban regeneration priority regions, these regions were thus subdivided by type; in this study, approach methods and basic orientations of urban regeneration were set to reflect reality to a certain degree, including indicators of effective floating population, to identify the dynamic activity of urban areas and the existing regeneration priority areas in connection with urban regeneration projects by region. It is therefore possible to make effective urban plans by offering a substantial basis grounded in scientific and quantitative data. To induce more realistic and effective regeneration projects, projects tailored to present local conditions should be developed by reflecting those conditions in the formulation of urban regeneration strategic plans.
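A minimal sketch of how such behavior-based typing could work on 24-hour floating-population profiles. The features (dawn share, mean level, flatness) and every threshold below are hypothetical, chosen only to mirror the four reported types; they are not the authors' actual classification procedure.

```python
import numpy as np

def classify_region(hourly):
    """Assign one of four illustrative types from a 24-hour
    floating-population profile (counts per hour)."""
    hourly = np.asarray(hourly, dtype=float)
    mean = hourly.mean()
    dawn_share = hourly[4:7].sum() / hourly.sum()  # share of traffic at dawn
    flatness = hourly.std() / mean                 # low -> round-the-clock activity
    if dawn_share > 0.2:
        return "I: morning type"
    if mean < 50:
        return "IV: low floating population"
    if flatness < 0.25:
        return "III: 24-hour type"
    return "II: commuting type"

morning = [10] * 24
morning[4:7] = [200, 200, 200]       # strong dawn peak
label = classify_region(morning)     # "I: morning type"
```

With such a function, each of the 23 regions' hourly profiles could be mapped to a type and compared against the existing priority-region classification.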

Keywords: floating population, big data, urban regeneration, urban regeneration priority region, type classification

Procedia PDF Downloads 180
50 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino

Abstract:

Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, traditional broad-scale risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. 
Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 39
49 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding how information flow and knowledge sharing within the network influence patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric of patent influence. A negative binomial model is then employed to test the nonlinear relationship between these network structural features and patent influence. 
The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
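To make the coupling construction concrete, here is a toy sketch (not the study's code or data): two patents become linked whenever their backward citations share a scientific paper, and degree centrality is computed on the resulting network. The inverted-U test would then regress citation counts on these centralities and their squares in a negative binomial model, a step this sketch does not reproduce.

```python
from collections import defaultdict
from itertools import combinations

def coupling_edges(citations):
    """Bibliographic-coupling edges: patents that cite at least one
    common scientific paper are connected."""
    by_paper = defaultdict(set)
    for patent, papers in citations.items():
        for paper in papers:
            by_paper[paper].add(patent)
    edges = set()
    for patents in by_paper.values():
        for a, b in combinations(sorted(patents), 2):
            edges.add((a, b))
    return edges

def degree_centrality(edges, nodes):
    # Normalized degree: number of neighbors divided by (n - 1).
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {k: v / (n - 1) for k, v in deg.items()}

# Hypothetical patents P1..P4 citing hypothetical papers S1..S9.
citations = {
    "P1": {"S1", "S2"},
    "P2": {"S2", "S3"},
    "P3": {"S3"},
    "P4": {"S9"},  # shares no cited paper -> isolated node
}
edges = coupling_edges(citations)
cent = degree_centrality(edges, citations)
```

Here P2 bridges P1 and P3 and so gets the highest centrality, the kind of brokerage position whose effect on citations the study measures.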

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 24
48 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute

Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández

Abstract:

The ability of an organization to identify and acquire useful information from external sources, assimilate it, and transform and apply it to generate products or services with added value is called absorptive capacity. Absorptive capabilities give firms access to market opportunities and a leading position with respect to their competitors. The Bioprocess Development and Research Unit (UDIBI) is a research and development (R&D) laboratory that belongs to the National Polytechnic Institute (IPN), a higher education institution in Mexico. The UDIBI was created to carry out R&D activities for Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competencies and its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and biocompatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as being of high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition, UDIBI belongs to a public university, and its operation does not depend on the federal budget but on the income generated by its external technological services. This represents a highly remarkable case in the context of Mexico's public higher education. This doctoral research (2015-2019) is framed as a case study; its main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of only two third-party laboratories authorized by the sanitary authority in Mexico to conduct biocomparability studies of biopharmaceutical products. The fieldwork is divided into two phases. 
In the first phase, 15 interviews were conducted with UDIBI personnel, covering management levels, heads of services, project leaders, and laboratory personnel. These interviews were structured around a questionnaire designed to combine open questions with, to a lesser extent, questions answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was written (currently under review), and presentation proposals were submitted to different academic forums. The second phase will consist of an ethnographic study within the organization lasting about three months. In addition, interviews are planned with external actors around the UDIBI (suppliers, advisors, IPN officials), including contact with an academic specializing in absorptive capacities, to gather their comments on this thesis. The initial findings point to two lines: i) institutional, technological, and organizational management elements exist that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and ii) UDIBI has created a set of knowledge and technology transfer mechanisms that have allowed it to build a substantial base of prior knowledge.

Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge

Procedia PDF Downloads 190
47 Female Subjectivity in William Faulkner's Light in August

Authors: Azza Zagouani

Abstract:

Introduction: In the work of William Faulkner, characters often evade the boundaries and categories of patriarchal standards of order. Female characters like Lena Grove and Joanna Burden cross thresholds in attempts to gain liberation, while others fail to do so. They stand as non-conformists and refuse established patterns of feminine behavior, such as marriage and motherhood. They reject submissiveness, domesticity, and abstinence in order to reshape their own identities. The presence of independent and creative women represents new, unconventional images of female subjectivity. This paper will examine the structures of submission and oppression faced by Lena and Joanna, and will show how, in the end, they reshape themselves and their identities and disrupt or even destroy patriarchal structures. Objectives: Participants will understand, through the examples of Lena Grove and Joanna Burden, that female subjectivities are constructions and are constantly subject to change. Approaches: Two approaches will be used in the analysis of the subjectivity formation of Lena Grove and Joanna Burden. First, following the arguments propounded by Judith Butler, we explore the ways in which Lena Grove maneuvers around the restrictions and limitations imposed on her without any physical or psychological violence. She does this by properly performing the roles prescribed to her gendered body. Her repetitious performances of these roles are both the constructions meant to confine women and the vehicle for her travel. Her performance parodies the prescriptive roles and thereby reveals that they are cultural constructions. Second, we will explore the argument propounded by Kristeva that subjectivity is always in a state of development because we are always changing along with changing circumstances. For example, in Light in August, Lena Grove changes the way she defines herself in light of the events of the novel. 
Kristeva also describes stages of development: the semiotic stage and the symbolic stage. In Light in August, Joanna shows different levels of subjectivity as time passes. Early in the novel, Joanna is very connected to her upbringing. This suggests Kristeva’s concept of the semiotic, in which the daughter identifies closely with her parents. Kristeva relates the semiotic to a strong daughter/mother connection, but in the novel it is a strong daughter/father/grandfather identification instead. Then, as Joanna becomes sexually involved with Joe, she breaks off and seems to go into an identity crisis. This represents Kristeva’s move from the semiotic to the symbolic. When Joanna returns to religious fanaticism, she is returning to a semiotic state. Detailed outline: At the outset of this paper, we will investigate the subjugation of women: social constraints and the formation of feminine identity in Light in August. Then, through the examples of Lena Grove’s attempt to cross the boundaries of community moralities and Joanna Burden’s refusal to submit to the standards of submissiveness, domesticity, and abstinence, we will reveal the tension between progressive conceptions of individual freedom and the social constraints that limit this freedom. In the second part of the paper, we will underscore the rhetoric of femininity in Light in August: subjugation through naming. The implications of both women’s names offer a powerful contrast between two different forms of subjectivity. Conclusion: Through Faulkner’s novel, we demonstrate that female subjectivity is an open-ended issue. The spiral shaping of its form maintains its character as a process that changes according to different circumstances.

Keywords: female subjectivity, Faulkner’s Light in August, gender, sexuality, diversity

Procedia PDF Downloads 352
46 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment

Authors: Loyd R. Hook, Maryam Moharek

Abstract:

With the envisioned future growth of low-altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to this future vision, in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions and still account for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low-altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflicts and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by the vehicle's flight envelope, vehicles can remain as close as possible to the originally planned path and prevent cascading vehicle-to-vehicle conflicts. 
Performing the search for a set of commands that can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan can contain multiple segments. This means that relative velocities will change when any aircraft reaches a waypoint and changes course. Additionally, the timing of when that aircraft will reach a waypoint (or, more directly, the order in which all of the aircraft will reach their respective waypoints) will change with the commanded speed. Taken together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem, a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of its computational complexity, this technique is only tractable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
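A toy sketch of the discretized depth-first search with early stopping described above, reduced to a single shared merge point: each aircraft is assigned one speed from a discrete set so that arrival times are separated by a minimum gap. The single-merge-point geometry, the time-gap separation criterion, and all names are illustrative simplifications, not the paper's formulation.

```python
def deconflict(distances, speeds, min_gap):
    """Depth-first search for one speed per aircraft such that arrival
    times at a shared merge point differ by at least min_gap.
    distances[i]: distance of aircraft i from the merge point.
    speeds: candidate speeds, fastest first (so faster plans are preferred)."""
    assignment = []

    def feasible(times):
        return all(abs(a - b) >= min_gap
                   for i, a in enumerate(times) for b in times[i + 1:])

    def dfs(i):
        if i == len(distances):
            return True
        for v in speeds:
            assignment.append(v)
            times = [d / s for d, s in zip(distances, assignment)]
            if feasible(times) and dfs(i + 1):
                return True  # early stopping: first full solution wins
            assignment.pop()
        return False

    return list(assignment) if dfs(0) else None

plan = deconflict([100.0, 100.0, 100.0], [10.0, 8.0, 5.0], min_gap=2.0)
# plan == [10.0, 8.0, 5.0]: each aircraft slows just enough to open the gap
```

Infeasible instances simply return None after the search backtracks, which is where the paper's richer multi-segment formulation and speed discretization would matter.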

Keywords: strategic planning, autonomous, aircraft, deconfliction

Procedia PDF Downloads 69
45 Advancing Early Intervention Strategies for United States Adolescents and Young Adults with Schizophrenia in the Post-COVID-19 Era

Authors: Peggy M. Randon, Lisa Randon

Abstract:

Introduction: The post-COVID-19 era has presented unique challenges for addressing complex mental health issues, particularly due to exacerbated stress, increased social isolation, and disrupted continuity of care. This article outlines relevant health disparities and policy implications within the context of the United States while maintaining international relevance. Methods: A comprehensive literature review (including studies, reports, and policy documents) was conducted to examine concerns related to childhood-onset schizophrenia and the impact on patients and their families. Qualitative and quantitative data were synthesized to provide insights into the complex etiology of schizophrenia, the effects of the pandemic, and the challenges faced by socioeconomically disadvantaged populations. Case studies were employed to illustrate real-world examples and areas requiring policy reform. Results: Early intervention in childhood is crucial for preventing or mitigating the long-term impact of complex psychotic disorders, particularly schizophrenia. A comprehensive understanding of the genetic, environmental, and physiological factors contributing to the development of schizophrenia is essential. The COVID-19 pandemic worsened symptoms and disrupted treatment for many adolescent patients with schizophrenia, emphasizing the need for adaptive interventions and the utilization of virtual platforms. Health disparities, including stigma, financial constraints, and language or cultural barriers, further limit access to care, especially for socioeconomically disadvantaged populations. Policy implications: Current US health policies inadequately support patients with schizophrenia. The limited availability of longitudinal care, insufficient resources for families, and stigmatization represent ongoing policy challenges. 
Addressing these issues necessitates increased research funding, improved access to affordable treatment plans, and cultural competency training for healthcare providers. Public awareness campaigns are crucial to promote knowledge, awareness, and acceptance of mental health disorders. Conclusion: The unique challenges faced by children and families in the US affected by schizophrenia and other psychotic disorders have yet to be adequately addressed on institutional and systemic levels. The relevance of findings to an international audience is emphasized by examining the complex factors contributing to the onset of psychotic disorders and their global policy implications. The broad impact of the COVID-19 pandemic on mental health underscores the need for adaptive interventions and global responses. Addressing policy challenges, improving access to care, and reducing the stigma associated with mental health disorders are crucial steps toward enhancing the lives of adolescents and young adults with schizophrenia and their family members. The implementation of virtual platforms can help overcome barriers and ensure equitable access to support and resources for all patients, enabling them to lead healthy and fulfilling lives.

Keywords: childhood, schizophrenia, policy, United States, health disparities

Procedia PDF Downloads 47
44 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring

Authors: Katerina Krizova, Inigo Molina

Abstract:

The ongoing climate change affects various natural processes, resulting in significant changes to human life. Since the planet's still-growing human population has more or less limited resources, agricultural production has become an issue and a satisfactory amount of food has to be ensured. To achieve this, agriculture is being studied in a very wide context. The main aim is to increase primary production per spatial unit while consuming as few resources as possible. In Europe nowadays, the main issue arises from the significantly changing spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods that have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being implemented in current practice. However, choosing the right management always requires a set of information about plot properties that must first be acquired. Remotely sensed data have gained attention in recent decades since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms have been launched carrying various types of sensors. Spectral indices based on reflectance in the visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data still has a fundamental limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized, since the information is hidden under the clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate key soil (roughness, moisture) and vegetation (LAI, biomass, height) properties. 
Although all of these are highly demanded in terms of agricultural monitoring, crop moisture content is the most important parameter in terms of agricultural drought monitoring. The idea behind this study was to exploit the unique combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. For the period of January to August, Sentinel-1 and Sentinel-2 images were obtained and processed. Sentinel-1 imagery carries information about C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FCV, NDWI, and SAVI) in support of the Sentinel-1 results. For each term and plot, summary statistics were computed, including precipitation data and soil moisture content obtained through data loggers. Results were presented as summary layouts of VV and VH polarisations and related plots describing the other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the drought impact on final product quality and yields, independently of cloud cover over the studied scene.
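The Sentinel-2-derived indices mentioned above follow standard reflectance formulas. A minimal sketch with illustrative reflectance values; the band choices (Sentinel-2 B8 as NIR, B4 as red, B11 as SWIR) are common conventions assumed here, not details stated in the abstract:

```python
import numpy as np

def ndwi(nir, swir):
    """Normalized Difference Water Index (Gao formulation),
    sensitive to vegetation liquid water content."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Illustrative surface reflectances (B8 ~ NIR, B4 ~ red, B11 ~ SWIR)
nir, red, swir = 0.40, 0.08, 0.20
print(round(float(ndwi(nir, swir)), 3))  # water-content proxy
print(round(float(savi(nir, red)), 3))   # soil-adjusted greenness
```

Both indices accept whole reflectance arrays, so the same two functions apply per-pixel across an entire Sentinel-2 scene.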

Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content

Procedia PDF Downloads 95
43 Economic Impacts of Sanctuary and Immigration and Customs Enforcement Policies: Inclusive and Exclusive Institutions

Authors: Alexander David Natanson

Abstract:

This paper focuses on the effect of Sanctuary and Immigration and Customs Enforcement (ICE) policies on local economies. "Sanctuary cities" refers to municipal jurisdictions that limit their cooperation with the federal government's efforts to enforce immigration law. Using county-level data from the American Community Survey and ICE data on economic indicators from 2006 to 2018, this study isolates the effects of local immigration policies on U.S. counties. The investigation is accomplished by simultaneously studying the policies' effects in counties where immigrants' families are persecuted via collaboration with ICE, in contrast to counties that provide protections. The analysis includes difference-in-differences and two-way fixed-effects models. Results are robust to nearest-neighbor matching, to the random assignment of treatment, to estimations using different cutoffs for immigration policies, and to a regression discontinuity model comparing bordering counties with opposite policies. Results are also robust after restricting the data to single-year policy adoption, using the Sun and Abraham estimator, and with event-study estimation to deal with the staggered-treatment issue. In addition, the study reverses the estimation to understand what drives the choice of policies and to detect the presence of reverse-causality biases in the estimated policy impact on economic factors. The evidence demonstrates that providing protections to undocumented immigrants increases economic activity. The estimates show gains in per capita income ranging from 3.1 to 7.2 percent, in median wages from 1.7 to 2.6 percent, and in GDP from 2.4 to 4.1 percent. Regarding labor, sanctuary counties saw increases in total employment of between 2.3 and 4 percent, while the unemployment rate declined by 12 to 17 percent. 
The data further show that ICE policies have no statistically significant effects on income, median wages, or GDP, but adverse effects on total employment, with declines of 1 to 2 percent, mostly in rural counties, and an increase in unemployment of around 7 percent in urban counties. In addition, results show a decline in the foreign-born population in ICE counties but no change in sanctuary counties. The study also finds similar results for sanctuary counties when separating the data by urban or rural status, educational attainment, gender, ethnic group, economic quintile, and number of business establishments. The takeaway from this study is that institutional inclusion underpins the dynamism of an economy, as inclusion allows for economic expansion by extending fundamental freedoms to newcomers. Inclusive policies show positive effects on economic outcomes with no evident increase in population. To make sense of these results, the hypothesis and theoretical model propose that inclusive immigration policies play an essential role in conditioning the effect of immigration by decreasing uncertainties and constraints on immigrants' interaction in their communities, decreasing the costs arising from fear of deportation or constant fear of criminalization, and optimizing their human capital.
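The canonical 2x2 contrast underlying the difference-in-differences strategy described above can be sketched as follows; the county income figures are toy values chosen only to show the shape of the calculation, not the study's data:

```python
def mean(xs):
    """Arithmetic mean of a sequence."""
    return sum(xs) / len(xs)

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences: the treated group's
    pre-to-post change minus the control group's pre-to-post change."""
    return (mean(y_treat_post) - mean(y_treat_pre)) \
         - (mean(y_ctrl_post) - mean(y_ctrl_pre))

# Toy per capita incomes (thousands): sanctuary vs. comparison counties
sanctuary_pre,  sanctuary_post  = [50, 52, 48], [56, 58, 54]
comparison_pre, comparison_post = [49, 51, 50], [51, 53, 52]
print(did_estimate(sanctuary_pre, sanctuary_post,
                   comparison_pre, comparison_post))  # → 4.0
```

The two-way fixed-effects and event-study estimators in the paper generalize this same contrast to many counties, periods, and staggered adoption dates.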

Keywords: inclusive and exclusive institutions, post-matching, fixed effects, time trends, regression discontinuity, difference-in-differences, randomization inference, Sun and Abraham estimator

Procedia PDF Downloads 57
42 The 5-HT1A Receptor Biased Agonists, NLX-101 and NLX-204, Elicit Rapid-Acting Antidepressant Activity in Rat Similar to Ketamine and via GABAergic Mechanisms

Authors: A. Newman-Tancredi, R. Depoortère, P. Gruca, E. Litwa, M. Lason, M. Papp

Abstract:

The N-methyl-D-aspartic acid (NMDA) receptor antagonist, ketamine, can elicit rapid-acting antidepressant (RAAD) effects in treatment-resistant patients, but it requires parenteral co-administration with a classical antidepressant under medical supervision. In addition, ketamine can also produce serious side effects that limit its long-term use, and there is much interest in identifying RAADs based on ketamine’s mechanism of action but with safer profiles. Ketamine elicits GABAergic interneuron inhibition, glutamatergic neuron stimulation, and, notably, activation of serotonin 5-HT1A receptors in the prefrontal cortex (PFC). Direct activation of the latter receptor subpopulation with selective ‘biased agonists’ may therefore be a promising strategy to identify novel RAADs and, consistent with this hypothesis, the prototypical cortical biased agonist, NLX-101, exhibited robust RAAD-like activity in the chronic mild stress model of depression (CMS). The present study compared the effects of a novel, selective 5-HT1A receptor-biased agonist, NLX-204, with those of ketamine and NLX-101. Materials and methods: CMS procedure was conducted on Wistar rats; drugs were administered either intraperitoneally (i.p.) or by bilateral intracortical microinjection. Ketamine: 10 mg/kg i.p. or 10 µg/side in PFC; NLX-204 and NLX-101: 0.08 and 0.16 mg/kg i.p. or 16 µg/side in PFC. In addition, interaction studies were carried out with systemic NLX-204 or NLX-101 (each at 0.16 mg/kg i.p.) in combination with intracortical WAY-100635 (selective 5-HT1A receptor antagonist; 2 µg/side) or muscimol (GABA-A receptor agonist, 12.5 ng/side). Anhedonia was assessed by CMS-induced decrease in sucrose solution consumption; anxiety-like behavior was assessed using the Elevated Plus Maze (EPM), and cognitive impairment was assessed by the Novel Object Recognition (NOR) test. 
Results: A single administration of NLX-204 was sufficient to reverse the CMS-induced deficit in sucrose consumption, similarly to ketamine and NLX-101. NLX-204 also reduced CMS-induced anxiety in the EPM and abolished CMS-induced NOR deficits. These effects were maintained (EPM and NOR) or enhanced (sucrose consumption) over a subsequent 2-week period of treatment. The anti-anhedonic response to the drugs was also maintained for several weeks following treatment discontinuation, suggesting that they had sustained effects on neuronal networks. A single PFC administration of NLX-204 reversed deficient sucrose consumption, similarly to ketamine and NLX-101. Moreover, the anti-anhedonic activities of systemic NLX-204 and NLX-101 were abolished by coadministration with intracortical WAY-100635 or muscimol. Conclusions: (i) The antidepressant-like activity of NLX-204 in the rat CMS model was as rapid as that of ketamine or NLX-101, supporting the targeting of cortical 5-HT1A receptors with selective biased agonists to achieve RAAD effects. (ii) The anti-anhedonic activity of systemic NLX-204 was mimicked by local administration of the compound in the PFC, confirming the involvement of cortical circuits in its RAAD-like effects. (iii) Notably, the effects of systemic NLX-204 and NLX-101 were abolished by PFC administration of muscimol, indicating that they act by (indirectly) eliciting a reduction in cortical GABAergic neurotransmission. This is consistent with ketamine's mechanism of action and suggests that converging NMDA and 5-HT1A receptor signaling cascades in the PFC underlie the RAAD-like activities of ketamine and NLX-204. Acknowledgements: The study was financially supported by NCN grant no. 2019/35/B/NZ7/00787.

Keywords: depression, ketamine, serotonin, 5-HT1A receptor, chronic mild stress

Procedia PDF Downloads 74
41 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators

Authors: N. Naz, A. D. Domenico, M. N. Huda

Abstract:

Soft robotics is a rapidly growing multidisciplinary field where robots are fabricated using highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator (SPA) remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, where the mold is 3D-printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming, involves intensive manual labour, and limits repeatability and design accuracy. Recent advancements in direct 3D printing of different soft materials can significantly reduce repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research is to design and analyse Soft Pneumatic Actuators (SPAs) utilizing both conventional casting and modern direct 3D-printing technologies. The mold of the SPA for traditional casting is 3D-printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different air compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used. 
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of different hyperelastic materials after pressurization. FEM analysis is carried out using Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM model includes large deformation, contact between surfaces, and gravity effects. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in turn connected to the air compressor. A fixed boundary is applied on the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A comparison study based on performance analysis is carried out between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper and have been tested for handling delicate objects.
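The hyperelastic response underlying the FEM analysis can be illustrated with a second-order Yeoh strain-energy function in the simplest loading case, incompressible uniaxial tension. The coefficients below are illustrative values of the order reported for soft silicones in the literature, not fitted parameters from this study:

```python
def yeoh2_uniaxial_stress(stretch, c10, c20):
    """Cauchy stress for incompressible uniaxial tension under a
    second-order Yeoh strain-energy function
    W = C10*(I1 - 3) + C20*(I1 - 3)**2."""
    lam = stretch
    i1 = lam**2 + 2.0 / lam                  # first strain invariant
    dW_dI1 = c10 + 2.0 * c20 * (i1 - 3.0)    # dW/dI1
    return 2.0 * (lam**2 - 1.0 / lam) * dW_dI1

# Illustrative coefficients (MPa), assumed for demonstration only
C10, C20 = 0.02, 0.001
for lam in (1.0, 1.5, 2.0):
    print(lam, round(yeoh2_uniaxial_stress(lam, C10, C20), 4))
```

At unit stretch the stress is exactly zero, and the quadratic term makes the material stiffen at large stretches, which is the qualitative behaviour the nonlinear FEM bending analysis captures.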

Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator

Procedia PDF Downloads 55
40 Climate Change Implications on Occupational Health and Productivity in Tropical Countries: Study Results from India

Authors: Vidhya Venugopal, Jeremiah Chinnadurai, Rebekah A. I. Lucas, Tord Kjellstrom, Bruno Lemke

Abstract:

Introduction: The effects of climate change (CC) are widely discussed across the globe in terms of impacts on the environment and the general population, but the impacts on workers remain largely unexplored. The predicted rise in temperatures and heat events in the CC scenario has health implications for millions of workers in physically exerting jobs. The current health and productivity risks associated with heat exposures are characterized, future risks as temperatures rise are estimated, and recommendations towards developing protective and preventive occupational health and safety guidelines for India are discussed. Methodology: Cross-sectional studies were conducted in several occupational sectors with workers engaged in moderate to heavy labor (n=1580). Quantitative data on heat exposures (WBGT °C) and physiological heat strain indicators, viz. core body temperature (CBT), urine specific gravity (USG), and sweat rate (SwR), together with qualitative data on heat-related health symptoms and productivity losses, were collected. Data were analyzed for associations between heat exposures and heat-related health and productivity outcomes. Findings: Heat conditions exceeded the Threshold Limit Value (TLV) for safe manual work for 66% of the workers across several sectors (average WBGT of 28.7 ± 3.1 °C). Concerns about heat-related health outcomes were widespread (86%) among workers exposed above the TLVs, with excessive sweating, fatigue, and tiredness being commonly reported. The heat strain indicators core temperature (14%), sweat rate (8%), and USG (9%) were above normal levels in the study population. A significant association was found between rises in core temperature and WBGT exposures (p=0.000179). Elevated USG and SwR in the worker population indicate moderate dehydration, with potential risks of developing heat-related illnesses. In a steel industry with high heat exposures, an alarming 9% prevalence of kidney/urogenital anomalies was observed in a young workforce. 
Heat exposures above the TLVs were associated with significantly increased odds of various adverse health outcomes (OR=2.43, 95% CI 1.88 to 3.13, p<0.0001) and productivity losses (OR=1.79, 95% CI 1.32 to 2.4, p=0.0002). Rough estimates of the share of workers who would be subjected to above-TLV levels in the various RCP scenarios are 79% under RCP2.6, 81% under RCP4.5 and RCP6, and 85% under RCP8.5. Rising temperatures due to CC have the capacity to further reduce already compromised health and productivity by subjecting workers to increased heat exposures, and these RCP projections are of concern for the country's occupational health and economy. Conclusion: The findings of this study clearly identify that health protection from hot weather will become increasingly necessary in the Indian subcontinent, and understanding the various adaptation techniques needs urgent attention. Further research with a multi-targeted approach to develop strategies for implementing interventions to protect millions of workers is imperative. Approaches that include health aspects of climate change within sectoral and climate-change-specific policies should be encouraged via a number of mechanisms, such as the "Health in All Policies" approach, to avert adverse health and productivity consequences as climate change proceeds.
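Odds ratios of the kind reported above are computed from a 2x2 exposure-outcome table with the standard Wald confidence interval. A minimal sketch; the cell counts are hypothetical, chosen only to show the shape of the calculation, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: workers above/below TLV vs. adverse outcome
or_, lo, hi = odds_ratio_ci(240, 400, 120, 480)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR above 1 with a CI excluding 1, as in the reported OR=2.43 (1.88 to 3.13), indicates a statistically significant excess of adverse outcomes among the heat-exposed group.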

Keywords: heat stress, occupational health, productivity loss, heat strain, adverse health outcomes

Procedia PDF Downloads 295
39 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy

Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani

Abstract:

Aluminum alloys are widely used in many engineering applications due to advantages such as high electrical and thermal conductivities, low density, high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming the insufficient wear resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxides (AAO) with silver ions at different temperatures. Before wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. Considering the more recent regulations about environmental hazard, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer. 
Each area value, obtained as the average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with DoE methodology, a statistical analysis was carried out to identify the factors most influencing the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with increasing normal load. In the wear tests under dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused delamination of some regions of the wear track. In the wear tests under lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.
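Going from the averaged cross-section area to a wear rate is a short geometric step for a circular ball-on-disc track, followed by an Archard-type normalization by load and sliding distance. A sketch with hypothetical numbers; the area, track radius, and load below are illustrative, not measured values from this study:

```python
import math

def wear_volume(cross_section_area_mm2, track_radius_mm):
    """Wear volume of a circular ball-on-disc track: mean worn
    cross-section area times the track circumference."""
    return cross_section_area_mm2 * 2.0 * math.pi * track_radius_mm

def specific_wear_rate(volume_mm3, load_n, distance_m):
    """Archard-type specific wear rate k = V / (F * s), in mm^3/(N*m)."""
    return volume_mm3 / (load_n * distance_m)

# Hypothetical test values: 0.002 mm^2 mean section, 5 mm track radius,
# 10 N load, 200 m sliding distance (distance matches the ASTM setup above)
V = wear_volume(cross_section_area_mm2=0.002, track_radius_mm=5.0)
k = specific_wear_rate(V, load_n=10.0, distance_m=200.0)
print(V, k)
```

Normalizing by load and distance is what makes wear rates comparable across the 5, 10, and 15 N test conditions.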

Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis

Procedia PDF Downloads 113
38 Relationship between Illegal Wildlife Trade and Community Conservation: A Case Study of the Chepang Community in Nepal

Authors: Vasundhara H. Krishnani, Ajay Saini, Dibesh Karmacharya, Salit Kark

Abstract:

Illegal Wildlife Trade is one of the most pressing global conservation challenges. Unregulated wildlife trade can threaten biodiversity, contribute to habitat loss, limit sustainable development efforts, and expedite species declines and extinctions. In low-income and middle-income countries, such as Nepal and other countries in Asia and Africa, many of the people engaged in the early stages of illegal wildlife trade, which includes the hunting and transportation of wildlife, belong to Indigenous tribes and local communities. These countries primarily rely on punitive measures to prevent and suppress Illegal Wildlife Trade. For example, in Nepal, people involved in wildlife crimes can be sentenced to a hefty fine and up to 15 years in prison. Despite these harsh punitive measures, illegal wildlife trade remains a significant conservation challenge in many countries. The aim of this study was to examine factors affecting the participation of Indigenous communities in Illegal Wildlife Trade while recording the experiences of members of the Indigenous Chepang community, some of whom were imprisoned for their alleged involvement in rhino poaching. The Chepangs, traditionally a hunter-gatherer community, are often considered an isolated and marginalized Indigenous community, some of whom live around the Chitwan National Park in Nepal. Established in 1973, Chitwan National Park is situated in the Chitwan Valley of Nepal and was one of the first regions in the country declared a protected area, aiming to protect the one-horned rhinoceros as a flagship species. 
Conducted over a period of three years, this study used semi-structured interviews and focus group discussions to collect data from Illegal Wildlife Trade offenders, family members of offenders, community Elders, NGO personnel, community forest representatives, Chepang community representatives, and government school teachers from the region surrounding Chitwan National Park. The study also examined the social, cultural, health, and financial impacts that the imprisonment of offenders had on the families of the community members, especially women and children. The results suggest that the involvement of members of the Chepang community living around Chitwan National Park in the poaching of the one-horned rhinoceros (Rhinoceros unicornis) can be attributed to a range of factors, including lack of livelihood opportunities, lack of awareness regarding wildlife rules and regulations, and poverty. This work emphasises the need for raising awareness and building programs that enhance alternative livelihood training and empower Indigenous and marginalised communities by providing sustainable alternatives. Furthermore, the issue needs to be addressed through a community solution that includes all community members. We suggest this multi-pronged approach can benefit wildlife conservation, by reducing illegal poaching and wildlife trade, as well as community conservation in regions with similar challenges. By actively involving and empowering local communities, the communities become key stakeholders in the conservation process. This involvement contributes to protecting wildlife and natural ecosystems while simultaneously providing sustainable livelihood options for local communities.

Keywords: alternative livelihoods, Chepang community, illegal wildlife trade, low- and middle-income countries, Nepal, one-horned rhinoceros

Procedia PDF Downloads 62
37 Chemical Synthesis and Microwave Sintering of SnO2-Based Nanoparticles for Varistor Films

Authors: Glauco M. M. M. Lustosa, João Paulo C. Costa, Leinig Antônio Perazolli, Maria Aparecida Zaghete

Abstract:

SnO2 has electrical conductivity due to an excess of electrons and structural defects, its electrical behavior being highly dependent on sintering temperature and chemical composition. The addition of metal modifiers into the crystalline structure can improve and control the behavior of some semiconductor oxides, which can therefore be developed into different applications such as varistors (ceramics with non-ohmic behavior between current and voltage, i.e., conductive during normal operation and resistive during overvoltage). The polymeric precursor method, based on the complexation reaction between a metal ion and a polycarboxylic acid followed by polymerization with ethylene glycol, was used to obtain ceramic nanopowders. The immobilization of the metal reduces its segregation during the decomposition of the polyester, resulting in a crystalline oxide with high chemical homogeneity. The preparation of films from ceramic nanoparticles using the electrophoretic deposition (EPD) method brings prospects for a new generation of smaller-size devices with easy technology integration. EPD allows control of time and current, and therefore of film thickness, surface roughness, and density, quickly and at low production cost. The sintering process is key to controlling the size and grain-boundary density of the film. In this step, diffusion of the metals promotes densification and controls or changes the intrinsic defects that form and modify the potential barrier at the grain boundary. The use of a microwave oven for sintering is advantageous due to its fast and homogeneous heating rate, which promotes diffusion and densification without irregular grain growth. In this research, a comparative study of sintering temperatures was carried out using zinc as a modifier agent to verify its influence on the sintering step, aiming to promote densification and grain growth, which influence the formation of the potential barrier and thus change the electrical behavior. 
SnO2 nanoparticles were obtained with 1 mol% ZnO + 0.05 mol% Nb2O5 (SZN) and deposited as a film through EPD (voltage 2 kV, time of 10 min) on a Si/Pt substrate. Sintering was done in a microwave oven at 800, 900, and 1000 °C. For complete coverage of the substrate by nanoparticles with low surface roughness and uniform thickness, 0.02 g of solid iodine was added to the alcoholic SnO2 suspension to increase the particle surface charge. A magnet was also used in the EPD system, which improved the deposition rate and formed a compact film. Using a high-resolution scanning electron microscope (FEG-SEM), nanoparticles with an average size between 10 and 20 nm were observed; after sintering, the average size was 150 to 200 nm and the film thickness 5 µm. It was also verified that sintering at 1000 °C was the most efficient, and the best sintering time was determined as 40 minutes. After sintering, the films were covered with a Cr³⁺ ion layer by EPD, and then the films were thermally treated again. The electrical characterization (nonlinear coefficient of 11.4, rupture voltage of ~60 V, and leakage current of 4.8×10⁻⁶ A) shows the new methodology to be suitable for preparing SnO2-based varistor films for the development of low-voltage electrical protection devices.
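The nonlinear coefficient quoted for the films is conventionally extracted from two points of the current-voltage curve, assuming the varistor power law I ∝ V^α. A minimal sketch; the two I-V points below are hypothetical values for illustration, not measured data from this study:

```python
import math

def nonlinear_coefficient(i1, v1, i2, v2):
    """Varistor nonlinearity coefficient alpha from two I-V points,
    assuming I ~ V**alpha:  alpha = log(I2/I1) / log(V2/V1)."""
    return math.log(i2 / i1) / math.log(v2 / v1)

# Hypothetical points one current decade apart (0.1 mA and 1 mA)
alpha = nonlinear_coefficient(i1=1e-4, v1=48.0, i2=1e-3, v2=58.7)
print(round(alpha, 1))
```

A larger alpha means a sharper conductive-to-resistive transition; an ohmic resistor would give alpha = 1.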

Keywords: chemical synthesis, electrophoretic deposition, microwave sintering, tin dioxide

Procedia PDF Downloads 238
36 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces

Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava

Abstract:

Preventing the spread of infectious diseases worldwide is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths in the world but also cause many pathological complications for human health. Touch surfaces pose an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Further, antimicrobial resistance is the response of bacteria to the overuse or inappropriate use of antibiotics everywhere. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, high-performance, rapid, real-time detection is demanded for practical bacterial monitoring and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can be used as direct and rapid methods for microorganism detection on different kinds of surfaces based on fluorescence, without sampling, sample preparation, or chemicals. The aim of this study was to assess the relevance of such systems to remote sensing of surfaces for microorganism detection, to prevent a global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10⁸ cells/100 µL) were detected with a hyperspectral camera using different filters to visualize bacteria and background spots on the steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from sample to hyperspectral camera and to light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX Tri-light with 3 W tri-colour LEDs (red, blue, and green). 
Light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure and was focused for light with λ=525 nm. The filter is a Thorlabs Kurios™ hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing, and multivariate analysis were performed using LabVIEW and Python software. Bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in clustering analysis using different light sources and filter wavelengths. The calculation of random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments were carried out with photometry and an ATP swab test. The lower detection limit of the developed method is several orders of magnitude lower than for both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method allows separating not only bacteria and surfaces but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. The developed method skips sample preparation and the use of chemicals, unlike all other microbiological methods. The time of analysis with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological tests.
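Multivariate separation of pixel spectra of the kind described can be sketched as nearest-reference classification by cosine similarity, a minimal stand-in for the clustering analysis mentioned above. The three-band "spectra" below are toy values for illustration, not measured data:

```python
import numpy as np

def classify_spectra(pixels, references):
    """Assign each pixel spectrum to the nearest reference spectrum
    by cosine similarity (unit-normalize, then take the max dot product)."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    return np.argmax(p @ r.T, axis=1)

# Toy 3-band reference "spectra": clean steel vs. a bacterial stain
refs = np.array([[0.9, 0.8, 0.7],    # class 0: steel background
                 [0.2, 0.6, 0.9]])   # class 1: stain fluorescence
pixels = np.array([[0.85, 0.75, 0.65],
                   [0.25, 0.55, 0.95]])
print(classify_spectra(pixels, refs))  # → [0 1]
```

Because the comparison uses spectral shape rather than absolute intensity, the same scheme can in principle separate bacterial classes whose fluorescence signatures differ, as reported for E. coli versus B. subtilis.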

Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganism detection

Procedia PDF Downloads 179
35 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing

Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay

Abstract:

Organic polymers, constructed from light elements like carbon, hydrogen, nitrogen, oxygen, sulphur, and boron atoms, are an emergent class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials due to their remarkable stability arising from strong covalent bonds. They can conventionally form hydrogen bonds and favour π–π contacts, and they were recently revealed to be involved in interesting anion–π interactions. The present work mainly focuses upon the development of a single-crystalline, highly cross-linked, triazine-based, nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer was synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer was evaluated by employing the Rietveld whole-profile fitting method. The polymer was found to be composed of monoclinic melamine with space group P2₁/a. A detailed insight into the chemical structure of the as-synthesized polymer was obtained by Fourier Transform Infrared Spectroscopy (FTIR) and Raman spectroscopic analysis. X-ray Photoelectron Spectroscopy (XPS) analysis was also carried out to further understand the different types of linkages required to create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer was revealed by Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM) images. Interestingly, this polymer was found to selectively detect mercury (Hg²⁺) ions at extremely low concentrations through fluorescence quenching, with a detection limit as low as 0.03 ppb.
The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity towards the sulphur atoms of biological building blocks. Even a trace quantity of this metal is dangerous for human health. Furthermore, owing to its small ionic radius and high solvation energy, the Hg²⁺ ion remains encapsulated by water molecules, making its detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers are rather scarce. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity has contemporary significance. A plausible sensing mechanism has been proposed to understand the applicability of the material as a potential sensor. The impressive sensitivity of the polymer sample towards Hg²⁺ is the very first report of highly crystalline triazine-based polymers (without the introduction of any sulphur groups or functionalization) detecting mercury ions through the photoluminescence quenching technique. This crystalline, metal-free organic polymer, being cheap, non-toxic, and scalable, has current relevance and could be a promising candidate for Hg²⁺ ion sensing at a commercial level.
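A detection limit of this kind is commonly estimated from a Stern-Volmer quenching plot with the 3σ/slope convention. The sketch below is generic and hedged: the quenching constant, intensities, and blank noise are invented for illustration and do not reproduce the reported 0.03 ppb figure.

```python
import numpy as np

# Stern-Volmer quenching: F0/F = 1 + Ksv*[Hg2+]. All numbers are assumed,
# not the paper's data; the point is the LOD bookkeeping, not the value.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])     # ppb Hg2+ (hypothetical)
F0 = 1000.0                                    # unquenched intensity (assumed)
Ksv = 0.05                                     # per ppb (assumed)
F = F0 / (1 + Ksv * conc)                      # ideal quenched intensities

slope, intercept = np.polyfit(conc, F0 / F, 1) # recovers Ksv from the plot

sigma_blank = 2.0                              # std. dev. of blank readings (assumed)
# Common 3-sigma criterion: LOD = 3*sigma_blank / |dF/dc| near zero
# concentration, where |dF/dc| ~ F0*Ksv for weak quenching.
lod = 3 * sigma_blank / (F0 * Ksv)
print(round(slope, 3), round(lod, 2))
```

With a steeper quenching response or lower blank noise, the same arithmetic drives the LOD into the sub-ppb regime reported above.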

Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer

Procedia PDF Downloads 98
34 Differential Expression Profile Analysis of DNA Repair Genes in Mycobacterium leprae by qPCR

Authors: Mukul Sharma, Madhusmita Das, Sundeep Chaitanya Vedithi

Abstract:

Leprosy is a chronic human disease caused by Mycobacterium leprae, which cannot be cultured in vitro. Though the disease is treatable with multidrug therapy (MDT), resistance to multiple antibiotics has recently been reported. Targeting DNA replication and repair pathways can serve as the foundation for developing new anti-leprosy drugs. Due to the absence of an axenic culture medium for the propagation of M. leprae, studying cellular processes, especially those belonging to DNA repair pathways, is challenging. The genome of M. leprae harbors several protein-coding genes with no previously assigned function, known as 'hypothetical proteins'. Here, we report the identification and expression of known and hypothetical DNA repair genes from human skin biopsies and mouse footpads that are involved in base excision repair (BER), direct reversal repair (DR), and the SOS response. Initially, a bioinformatics approach based on sequence similarity and identification of known protein domains was employed to screen the hypothetical proteins in the genome of M. leprae that are potentially related to DNA repair mechanisms. Before testing on clinical samples, pure stocks of reference DNA of M. leprae (NHDP63 strain) were used to construct standard curves to validate the qPCR experiments and identify their lower detection limit. Primers were designed to amplify the respective transcripts, and PCR products of the predicted size were obtained. Later, excisional skin biopsies of newly diagnosed untreated, treated, and drug-resistant leprosy cases from SIHR & LC hospital, Vellore, India were taken for the extraction of RNA. To determine the presence of the predicted transcripts, cDNA was generated from M. leprae mRNA isolated from clinically confirmed leprosy skin biopsy specimens across all the study groups. Melting curve analysis was performed to determine the specificity of the amplification and to rule out primer-dimer formation.
The Ct values obtained from qPCR were fitted to the standard curve to determine transcript copy numbers. The same procedure was applied to M. leprae extracted from footpads of nude mice infected with drug-sensitive and drug-resistant strains. 16S rRNA was used as a positive control. Of the 16 genes involved in BER, DR, and the SOS response, a differential expression pattern in terms of Ct values was observed between mouse and human samples; this was attributed to the different hosts and their immune responses. However, no drastic variation in gene expression levels was observed among human samples, except for the nth gene. The higher expression of the nth gene could be due to mutations that may be associated with sequence diversity and drug resistance, which suggests an important role in the repair mechanism and remains to be explored. In both human and mouse samples, the SOS genes lexA and recA and the BER genes alkB and ogt were efficiently expressed to deal with possible DNA damage. Together, the results of the present study suggest that DNA repair genes are constitutively expressed and may provide a reference for molecular diagnosis, therapeutic target selection, determination of treatment, and prognostic judgment in M. leprae pathogenesis.
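The standard-curve quantification step can be sketched as follows. The dilution series and Ct values below are illustrative, not the study's data; the fitted slope also gives the amplification efficiency via E = 10^(−1/slope) − 1 (−3.32 corresponds to 100% efficiency).

```python
import numpy as np

# Generic qPCR standard-curve quantification (hypothetical numbers):
# a 10-fold dilution series of known copy numbers gives Ct = m*log10(N) + b;
# unknown samples are then inverted through the fitted line.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])       # known standards
cts    = np.array([30.1, 26.8, 23.4, 20.1, 16.7])  # measured Ct (assumed)

m, b = np.polyfit(np.log10(copies), cts, 1)        # slope near -3.32 means ~100% efficiency
efficiency = 10 ** (-1 / m) - 1                    # amplification efficiency per cycle

def copy_number(ct):
    """Invert the standard curve: N = 10**((ct - b) / m)."""
    return 10 ** ((ct - b) / m)

print(round(m, 2), round(efficiency, 2), round(copy_number(25.0)))
```

In practice, an unknown sample's Ct is only trusted inside the calibrated range, and efficiency well outside roughly 90-110% signals a problem with primers or template.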

Keywords: DNA repair, human biopsy, hypothetical proteins, mouse footpads, Mycobacterium leprae, qPCR

Procedia PDF Downloads 77
33 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. 
These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal closures that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype across the different operating regimes.
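The inverse-method idea can be sketched on a deliberately simplified single-volume energy balance (not the authors' three-volume model): generate a temperature history with a known closure parameter, then recover that parameter by minimizing the misfit between model and data.

```python
import numpy as np

# Toy lumped model, dT/dt = -hA*(T - Tw)/(m*cp), whose closure parameter
# hA is recovered by least squares against (here, synthetic) data.
# All physical values are assumptions chosen for illustration only.
m_cp = 5.0e3       # J/K, lumped thermal capacity (assumed)
Tw = 80.0          # K, wall temperature (assumed)
dt, n = 1.0, 300   # forward-Euler step (s) and horizon

def simulate(hA, T0=95.0):
    T = np.empty(n)
    T[0] = T0
    for k in range(1, n):
        T[k] = T[k - 1] - dt * hA * (T[k - 1] - Tw) / m_cp
    return T

true_hA = 12.0
data = simulate(true_hA)   # stand-in for an experimental temperature history

# 1-D inverse problem: scan candidate closures, keep the best fit.
candidates = np.linspace(1.0, 30.0, 117)
sse = [np.sum((simulate(hA) - data) ** 2) for hA in candidates]
best = candidates[int(np.argmin(sse))]
print(best)
```

The real problem replaces the scalar scan with nonlinear optimization over several coupled closure coefficients, but the structure (forward model, misfit, optimizer) is the same.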

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 64
32 Assessment of Surface Water Quality in Belarus

Authors: Anastasiya Vouchak, Aliaksandr Volchak

Abstract:

Belarus is not short of water. However, there is a problem of water quality. Water pollution has both natural and man-made origins. This research is based on data from the State Water Cadastre of the Republic of Belarus registered from 1994 to 2014. We analyzed changes in such hydro-chemical criteria as the concentration of ammonium ions, suspended matter, dissolved oxygen, oil products, nitrites, and phosphates in water, the dichromate value, the water impurity index, and 5-day biochemical oxygen demand (BOD). Pollution of water with ammonium ions was observed in the Belarusian rivers Western Dvina, Polota, Schara, Usha, Muhavets, Berezina, Plissa, Svisloch, Pripyat, and Yaselda in 2006-2014, with concentrations exceeding the threshold limit value (TLV) 1.5-3 times. The concentration of ammonia in the Berezina exceeded the TLV 3-5 times in 2006-2010. The maximum excess of the TLV was registered in the Svisloch (10 km downstream of Minsk) in 2006-2007: over 4 mg/dm³, whereas the norm is 0.39 mg/dm³. In 1997 there were ammonia pollution spots in the Dnieper, the Berezina, and the Svisloch Rivers. Since 2006 we have observed pollution spots in the Neman, Ross, Vilia, Sozh, and Gorin Rivers and the Osipovichi and Soligorsk reservoirs. The dichromate value exceeds the TLV in 40% of cases. The most polluted waters are the Muhavets, Berezina, Pripyat, Yaselda, and Gorin Rivers and the Vileyka and Soligorsk reservoirs. The Western Dvina, Neman, Viliya, Schara, Svisloch, and Plissa Rivers are less polluted. The Dnieper is the cleanest in this respect. In terms of BOD, water is polluted in the Neman, Muhavets, Svisloch, Yaselda, and Gorin Rivers and the Osipovichi, Zaslavl, and Soligorsk reservoirs. The Western Dvina, Polota, Sozh, and Iputs Rivers and Lake Naroch are not polluted in this respect. This criterion has been decreasing in 33 out of 42 cases. The least suspended matter is in the Berezina, Sozh, and Iputs Rivers and Lake Naroch. The muddiest water is in the Neman, Usha, Svisloch, Pripyat, and Yaselda Rivers and the Osipovichi and Soligorsk reservoirs.
The water impurity index shows a reduction at all gauge stations. Multi-year average values predominantly (66.6%) correspond to the third class of water quality, i.e., moderately polluted. These include the Western Dvina, Ross, Usha, Muhavets, Dnieper, Berezina, Plissa, Iputs, Pripyat, Yaselda, and Gorin Rivers and the Osipovichi and Soligorsk reservoirs. Water in the Svisloch River downstream of Minsk is of the fourth quality class, i.e., most polluted. In the remaining cases (33.3%), water is relatively clean. These include the Lidea, Schara, Viliya, and Sozh Rivers, Lake Lukoml, Lake Naroch, and the Vileyka and Zaslavl reservoirs. Multi-year average dissolved oxygen values range from 7.0 to 9.5 mg О₂/dm³. The Yaselda has the lowest value, 6.7 mg О₂/dm³. A shortage of dissolved oxygen was found in the Berezina (2010), the Yaselda (2007), the Plissa (2011-2014), and the Soligorsk reservoir (1996). Contamination of water with oil products was observed everywhere in 1994-1999. Spots were found in the Western Dvina, Vilia, Usha, and Dnieper in 2003-2006 and in the Svisloch in 2002-2012. We are observing a gradual decrease of oil pollutants in surface water. Overall, 67% of surface water is classified as moderately polluted.
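The exceedance factors quoted above are simply the ratio of a measured concentration to its threshold limit value; for the Svisloch ammonium example given in the text:

```python
# Exceedance factor = measured concentration / TLV. Values are the ones
# stated in the text ("over 4 mg/dm3" against a 0.39 mg/dm3 norm).
tlv_ammonium = 0.39   # mg/dm3, the stated norm
measured = 4.0        # mg/dm3, Svisloch 10 km downstream of Minsk

exceedance = measured / tlv_ammonium
print(round(exceedance, 1))  # roughly a tenfold excess of the TLV
```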

Keywords: Belarus, hydro-chemical criteria, water pollution, water quality

Procedia PDF Downloads 125
31 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative of World Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. In contrast, estimates differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during Ground Level Enhancement (GLE) events, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate is on the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR contribution is based on the force-field approximation model. The physical description of the Solar Cosmic Rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient dose equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically representative set of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude) and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous work. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
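The final conversion step described above is a fluence-weighted sum, H*(10) = Σᵢ Φ(Eᵢ) · h(Eᵢ) · ΔEᵢ. The sketch below uses invented spectrum bins and coefficients (not ATMORAD output or tabulated ICRP/ICRU values) purely to show the bookkeeping.

```python
import numpy as np

# Ambient dose equivalent rate from a binned secondary-particle spectrum.
# Every array here is a placeholder assumption, not real data.
E   = np.array([1e-2, 1e-1, 1.0, 10.0, 100.0])     # MeV, bin centres (assumed)
dE  = np.array([0.018, 0.18, 1.8, 18.0, 180.0])    # MeV, bin widths (assumed)
phi = np.array([2.5, 1.5, 0.4, 0.05, 0.0025])      # fluence rate, cm^-2 s^-1 MeV^-1 (assumed)
h   = np.array([10.0, 80.0, 400.0, 500.0, 300.0])  # pSv cm^2 per particle (assumed)

dose_rate_pSv_s = np.sum(phi * h * dE)             # fluence-weighted sum, pSv/s
dose_rate_uSv_h = dose_rate_pSv_s * 3600 * 1e-6    # convert to uSv/h
print(round(dose_rate_uSv_h, 2))                   # a few uSv/h, the GCR order of magnitude at cruise altitude
```

In the real calculation this sum runs over each particle species (neutrons, protons, muons, electrons, photons) with its own conversion-coefficient curve, and is repeated along every point of the flight trajectory.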

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 186
30 Multi-Criteria Assessment of Biogas Feedstock

Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough

Abstract:

Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security-of-supply issues. Linked to fossil fuels are greenhouse gas emissions, and the EU plans to reduce emissions by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstocks potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method, with criterion weights derived by the entropy method, which measures the uncertainty in each criterion using probability theory. The TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge, and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed.
The highest weight (0.249) corresponded to production cost, reflecting a variation from a £41 gate fee to a £22/tonne cost. With these weights, grass silage was found to be the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh matrix method, which relies upon the Analytic Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) then corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield due to a synergistic effect, as the feedstocks favor positive biological interactions. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the agriculture sector's current contribution is 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical, mathematical manner in order to reach a resolution.
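The entropy-weighting and TOPSIS pipeline can be sketched as follows. The 3×3 decision matrix is invented for illustration; the study used eight criteria and real Northern Ireland data, so these scores do not reproduce its ranking.

```python
import numpy as np

# Hypothetical decision matrix: rows = alternatives, columns = criteria.
X = np.array([[300.0, 25.0, 55.0],    # grass silage: methane yield, cost, biogas
              [200.0, 10.0, 30.0],    # manure
              [450.0, 40.0, 60.0]])   # food waste
benefit = np.array([True, False, True])  # cost is a "lower is better" criterion

# Entropy weights: criteria whose values differ more across alternatives
# carry more information and therefore receive larger weights.
P = X / X.sum(axis=0)
ent = -np.sum(P * np.log(P), axis=0) / np.log(X.shape[0])
w = (1 - ent) / np.sum(1 - ent)

# TOPSIS: distances to the ideal and anti-ideal solutions on the
# weighted, vector-normalised matrix.
V = w * X / np.linalg.norm(X, axis=0)
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
score = d_neg / (d_pos + d_neg)
print(score.round(3))  # higher score = closer to the ideal feedstock
```

Changing the weight vector (e.g., swapping in AHP/Pugh-derived weights, as in the sensitivity analysis above) reorders the scores without touching the rest of the pipeline.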

Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy

Procedia PDF Downloads 424
29 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture

Authors: Zakia Hbellaq

Abstract:

The Mediterranean region is considered a hotspot for climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality. Its limited water resources constrain the activities of various economic sectors, and most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m³, equivalent to about 700 m³/inhabitant/year, which places Morocco in a state of structural water stress. Indeed, the Kingdom of Morocco is classed among the "very riskiest" countries by the World Resources Institute (WRI), which oversees the calculation of water stress risk in 167 countries: with a score of 3.89 out of 5, Morocco occupies 23rd place out of 167, indicating that the demand for water exceeds the available resources. Agriculture is the sector most affected by water stress, as irrigation places a heavy burden on the water table. Irrigation is an unavoidable technical need with undeniable economic and social benefits given the available resources and climatic conditions. Irrigation, and therefore the agricultural sector, currently uses 86% of the country's water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, irrigation also contributes to the overexploitation of most groundwater resources and to the marked decline in levels and deterioration of water quality in some aquifers. In this context, REUSE is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse, also known as REUSE (reuse of treated wastewater), is a step forward not only for the circular economy but also for the future, especially in the context of climate change.
In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the presence of organic trace pollutants (organic micro-pollutants), emerging contaminants, and salinity, innovative treatment capabilities must be mobilized to overcome these problems and ensure food and health safety. To this end, attention will be paid to the adoption of an integrated and attractive approach based on the reinforcement and optimization of the proposed treatments for the elimination of the organic load, with particular attention to the elimination of emerging pollutants. Since membrane bioreactors (MBRs) as stand-alone technologies are not able to meet the requirements of WHO guidelines, they will be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants. Similarly, adsorption and filtration are applied as tertiary treatment. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains that will be used to increase crop resistance to abiotic stresses, as well as through the use of modern omics tools, such as transcriptomic analysis using RNA sequencing and methylation analysis, to identify adaptive traits and the associated genetic diversity that is tolerant, resistant, or resilient to biotic and abiotic stresses. Ensuring this approach will undoubtedly alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.

Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants

Procedia PDF Downloads 115
28 Prevalence of Antibiotic-Resistant Bacteria Isolated from Fresh Vegetables Retailed in Eastern Spain

Authors: Miguel García-Ferrús, Yolanda Domínguez, M Angeles Castillo, M Antonia Ferrús, Ana Jiménez-Belenguer

Abstract:

Antibiotic resistance is a growing public health concern worldwide, and it is now regarded as a critical issue within the "One Health" approach, which affects human and animal health, agriculture, and environmental waste management. This concept focuses on the interconnected nature of human, animal, and environmental health, and the WHO highlights zoonotic diseases, food safety, and antimicrobial resistance as three particularly relevant areas for this framework. Fresh vegetables are garnering attention in the food chain due to the presence of pathogens and because they can act as a reservoir for Antibiotic-Resistant Bacteria (ARB) and Antibiotic Resistance Genes (ARG). These fresh products are frequently consumed raw, thereby contributing to the spread and transmission of antibiotic resistance. Therefore, the aim of this research was to study the microbiological quality of fresh vegetables intended for human consumption, the prevalence of ARB, and their role in the dissemination of ARG. For this purpose, 102 samples of fresh vegetables (30 lettuce, 30 cabbage, 18 strawberry, and 24 spinach) from different retail establishments in Valencia (Spain) were analyzed to determine their microbiological quality and their role in spreading ARB and ARG. The samples were collected and examined according to standardized methods for total viable bacteria, coliforms, Shiga toxin-producing Escherichia coli (STEC), Listeria monocytogenes, and Salmonella spp. Isolation was performed in culture media supplemented with antibiotics (cefotaxime and meropenem). A total of 239 strains resistant to beta-lactam antibiotics (third-generation cephalosporins and carbapenems) were isolated. Thirty Gram-negative isolates were selected and identified biochemically or by partial sequencing of 16S rDNA. Their sensitivity to 12 antibiotic discs belonging to different therapeutic groups was determined using the Kirby-Bauer disc diffusion technique.
To determine the presence of ARG, PCR assays were performed on DNA from direct samples and selected isolates for the main extended-spectrum beta-lactamase (ESBL)- and carbapenemase-encoding genes and for plasmid-mediated quinolone resistance genes. Of the total samples, 68% (24/24 spinach, 28/30 lettuce, and 17/30 cabbage) showed total viable bacteria levels over the accepted standard range of 10²-10⁵ cfu/g, and 48% (24/24 spinach, 19/30 lettuce, and 6/30 cabbage) showed coliform levels over the accepted standard range of 10²-10⁴ cfu/g. In 9 samples (3/24 spinach, 3/30 lettuce, 3/30 cabbage; 9/102, 9%), E. coli levels were higher than the standard 10³ cfu/g limit. Listeria monocytogenes, Salmonella, and STEC were not detected. Six different bacterial species were isolated from the samples. Stenotrophomonas maltophilia (64%) was the most prevalent species, followed by Acinetobacter pittii (14%) and Burkholderia cepacia (7%). All the isolates were resistant to at least one tested antibiotic, including meropenem (85%) and ceftazidime (46%). Of the total isolates, 86% were multidrug-resistant and 68% were ESBL producers. PCR results showed the presence of resistance genes to beta-lactams, blaTEM (4%) and blaCMY-2 (4%); to carbapenems, blaOXA-48 (25%), blaVIM (7%), blaIMP (21%), and blaKPC (32%); and to quinolones, qnrA (7%), qnrB (11%), and qnrS (18%). Thus, fresh vegetables harboring ARB and ARG constitute a potential risk to consumers. Further studies must be done to detect ARG and determine how they propagate in non-medical environments.

Keywords: ESBL, β-lactams, resistances, fresh vegetables

Procedia PDF Downloads 38
27 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes

Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi

Abstract:

The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells that are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing.
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes remain highly effective even for very coarse discretizations, since they rely only on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which elevates the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
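Because propagation depends only on the mode ratio, a kink-angle rule can be written down directly from the two gSIFs. The maximum circumferential stress criterion (Erdogan & Sih) shown below is a common generic choice for this step, not necessarily the authors' exact scheme.

```python
import math

# Maximum circumferential stress criterion: the crack kinks at the angle
# where the tangential stress is maximal,
#   theta_c = 2*atan( (KI - sqrt(KI**2 + 8*KII**2)) / (4*KII) ),
# which depends only on the KII/KI ratio, as noted in the text.
def kink_angle(KI, KII):
    """Crack kink angle in radians; 0 for pure mode I loading."""
    if KII == 0.0:
        return 0.0
    return 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

print(math.degrees(kink_angle(1.0, 0.0)))            # pure mode I: straight growth
print(round(math.degrees(kink_angle(1.0, 0.5)), 1))  # mixed mode: crack kinks away from mode II
```

Scaling both gSIFs by a common factor leaves the angle unchanged, which is why coarse-mesh propagation paths can stay accurate even when the absolute gSIF values carry large errors.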

Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees

26 Medical Decision-Making in Advanced Dementia from the Family Caregiver Perspective: A Qualitative Study

Authors: Elzbieta Sikorska-Simmons

Abstract:

Advanced dementia is a progressive terminal brain disease accompanied by a syndrome of difficult-to-manage symptoms and complications that eventually lead to death. The management of advanced dementia poses major challenges to family caregivers, who act as patient health care proxies in making medical treatment decisions. Little is known, however, about how they manage advanced dementia and how their treatment choices influence the quality of patient life. This prospective qualitative study examines the key medical treatment decisions that family caregivers make while managing advanced dementia. The term ‘family caregiver’ refers to a relative or a friend who is primarily responsible for managing the patient’s medical care needs and is legally authorized to give informed consent for medical treatments. Medical decision-making implies a process of choosing between treatment options in response to the patient’s medical care needs (e.g., worsening comorbid conditions, pain, infections, acute medical events). Family caregivers engage in this process when they actively seek treatments or follow recommendations by healthcare professionals. A better understanding of medical decision-making from the family caregiver perspective is needed to design interventions that maximize the quality of patient life and limit inappropriate treatments. Data were collected in three waves of semi-structured interviews with 20 family caregivers of patients with advanced dementia. A purposive sample of 20 family caregivers was recruited from a senior care center in Central Florida. The qualitative personal interviews were conducted by the author at 4-5 month intervals. Ethical approval for the study was obtained prior to data collection. Advanced dementia was operationalized as stage five or higher on the Global Deterioration Scale (GDS) (i.e., starting with a GDS score of five, patients are no longer able to survive without assistance due to major cognitive and functional impairments). 
Information about patients’ GDS scores was obtained from the Center’s Medical Director, who had an in-depth knowledge of each patient’s health and medical treatment history. All interviews were audiotaped and transcribed verbatim. The qualitative data analysis was conducted to answer the following research questions: 1) what treatment decisions do family caregivers make while managing the symptoms of advanced dementia and 2) how do these treatment decisions influence the quality of patient life? To validate the results, the author asked each participating family caregiver if the summarized findings accurately captured his/her experiences. The identified medical decisions ranged from seeking specialist medical care to end-of-life care. The most common decisions were related to arranging medical appointments, medication management, seeking treatments for pain and other symptoms, nursing home placement, and accessing community-based healthcare services. The most challenging and consequential decisions were related to the management of acute complications, hospitalizations, and discontinuation of treatments. Decisions that had the greatest impact on the quality of patient life and survival were triggered by traumatic falls, worsening psychiatric symptoms, and aspiration pneumonia. The study findings have important implications for geriatric nurses in the context of patient/caregiver-centered dementia care. Innovative nursing approaches are needed to support family caregivers to effectively manage medical care needs of patients with advanced dementia.

Keywords: advanced dementia, family caregiver, medical decision-making, symptom management

25 Electroactive Ferrocenyl Dendrimers as Transducers for Fabrication of Label-Free Electrochemical Immunosensor

Authors: Sudeshna Chandra, Christian Gäbler, Christian Schliebe, Heinrich Lang

Abstract:

Highly branched dendrimers provide structural homogeneity, controlled composition, size comparable to biomolecules, internal porosity, and multiple functional groups for conjugation reactions. Electroactive dendrimers containing multiple redox units have generated great interest for use as electrode modifiers in the development of biosensors. The electron transfer between the redox-active dendrimers and the biomolecules plays a key role in developing a biosensor. Ferrocenes have multiple, electrochemically equivalent redox units that can act as an electron “pool” in a system. The ferrocenyl-terminated polyamidoamine dendrimer is capable of transferring multiple electrons under the same applied potential. Therefore, it can serve a dual purpose: building a film over the electrode for immunosensors and immobilizing biomolecules for sensing. Electrochemical immunosensors thus developed offer fast and sensitive analysis, are inexpensive, and involve no prior sample pre-treatment. Electrochemical amperometric immunosensors are even more promising because they can achieve a very low detection limit with high sensitivity. Detection of cancer biomarkers at an early stage can provide crucial information for foundational life-science research, clinical diagnosis, and disease prevention. An elevated concentration of biomarkers in body fluid is an early indication of some types of cancerous disease, and among all the biomarkers, IgG is the most common and extensively used clinical cancer biomarker. We present an IgG (= immunoglobulin) electrochemical immunosensor using a newly synthesized redox-active ferrocenyl dendrimer of generation 2 (G2Fc) as a glassy carbon electrode material for immobilizing the antibody. The electrochemical performance of the modified electrodes was assessed in both aqueous and non-aqueous media using varying scan rates to elucidate the reaction mechanism. 
The potential shift was found to be higher in the aqueous electrolyte due to the presence of more hydrogen bonding, which reduced the electrostatic attraction within the amido groups of the dendrimers. Cyclic voltammetric studies of the G2Fc-modified GCE in 0.1 M PBS solution at pH 7.2 showed a pair of well-defined redox peaks. The peak current decreased significantly upon immobilization of the anti-goat IgG. After the immunosensor was blocked with BSA, a further decrease in the peak current was observed due to the attachment of the protein BSA to the immunosensor. A significant decrease in the current signal of the BSA/anti-IgG/G2Fc/GCE was observed upon immobilizing IgG, which may be due to the formation of immunoconjugates that block the tunneling of mass and electron transfer. The current signal was found to be directly related to the amount of IgG captured on the electrode surface. With increasing IgG concentration, an increasing amount of immunoconjugates formed, which further decreased the peak current. The incubation time and the concentration of the antibody were optimized for better analytical performance of the immunosensor. The developed amperometric immunosensor is sensitive to IgG concentrations as low as 2 ng/mL. Tailoring of redox-active dendrimers provides enhanced electroactivity to the system and enlarges the sensor surface for binding the antibodies. It may be assumed that both electron transfer and diffusion contribute to the signal transformation between the dendrimers and the antibody.

Keywords: ferrocenyl dendrimers, electrochemical immunosensors, immunoglobulin, amperometry

24 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body counting, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work on the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and WBC methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in staff was 4.27±0.22%, which is significantly higher than in people from the non-polluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by different characteristics (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated as 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used. 
Herewith, assuming individual variation of chromosomal aberration frequency (1–10%), the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in interpreting individual dosimetry results comes down to the differing reactions of individuals to irradiation, i.e., radiosensitivity, which dictates the need to quantify this individual reaction and to take it into account when calculating the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest dose received in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). By the criterion of radiosensitivity, the cohort in our research was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation exposure) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between dose values determined from the cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters. Mathematical models for adjusting the estimated radiation dose according to the professionals' radiosensitivity level were proposed.

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity
