Search results for: Modified Pacific Southwest Inter-Agency Committee (MPSIAC) model
107 When It Wasn’t There: Understanding the Importance of High School Sports
Authors: Karen Chad, Louise Humbert, Kenzie Friesen, Dave Sandomirsky
Abstract:
Background: The COVID-19 pandemic presented unprecedented challenges to the sporting community. For organizations and individuals, sport was put on hold, resulting in social, economic, physical, and mental health consequences for all involved. High school sports are seen as an effective and accessible pathway for students to receive health, social, and academic benefits. Studies examining sport cessation due to COVID-19 found substantial negative outcomes for the physical and mental well-being of participants in the high school setting. However, the pandemic afforded an opportunity to examine sport participation and the value people place upon their engagement in high school sport. Study objectives: (1) Examine the experiences of students, parents, administrators, officials, and coaches during a year without high school sports; (2) Understand why participants are involved in high school sports; and (3) Learn what supports are needed for future involvement. Methodology: A mixed-methods design was used, including semi-structured interviews and a survey (SurveyMonkey software), which was disseminated electronically to high school students, coaches, school administrators, parents, and officials. Results: 1222 respondents completed the survey. Findings showed: (1) 100% of students participate in high school sports to improve their mental health, and >95% said it keeps them active and healthy, helps them make friends and teaches teamwork, builds confidence and positive self-perceptions, teaches resiliency, enhances connectivity to their school, and supports academic learning; (2) The top three reasons teachers coach are a desire to make a difference in the lives of students, enjoyment and love of the sport, and a desire to give back.
Teachers said what they enjoy most is contributing to and watching athletes develop, direct involvement with student sport success, and the competitive atmosphere; (3) 90% of parents believe playing sports is a valuable experience for their child, 95% said it enriches student academic learning and educational experiences, and 97% encouraged their child to play school sports; (4) Officials participate because of their enjoyment and love of the sport, their experience and expertise, a desire to make a difference in the lives of children, the competitive/sporting atmosphere, and growing the sport; only 4% of officials said their participation was financially motivated; (5) 100% of administrators said high school sports are important for everyone, and 80% believed the pandemic will decrease teacher coaching and increase student mental health and well-being concerns. When there was no sport, many athletes got a part-time job and tried to stay active, with limited success. Coaches, officials, and parents spent more time with family. All participants did little physical activity, were bored, and struggled with poor mental and physical health. Respondents recommended better communication, promotion, and branding of high school sport benefits, equitable funding for all sports, athlete development, compensation and recognition for coaching, and simpler processes to strengthen the high school sport model. Conclusions: High school sport is an effective vehicle for athletes, parents, coaches, administrators, and officials to derive many positive outcomes. When it is taken away, serious consequences prevail. Paying attention to important success factors will be essential to the effectiveness of high school sports.
Keywords: physical activity, high school, sports, pandemic
Procedia PDF Downloads 145
106 Regenerating Habitats. A Housing Based on Modular Wooden Systems
Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez
Abstract:
Despite the ambitions to achieve climate neutrality by 2050, to fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex because it calls for the worldwide participation of numerous organizations in altering how building systems, which have been a part of our everyday existence for over a century, are used. Wood is one of the alternatives most frequently used nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, working as a natural carbon filter, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, technological advancements in the last few decades have made it possible to innovate systems centered around the use of wood. However, some questions still require further exploration. It is necessary to standardize production and manufacturing processes based on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, costs, and raw-material waste. In addition, this approach will make it possible to develop new architectural solutions to solve the rigidity and irreversibility of buildings, two of the most important issues facing housing today.
Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the conventional family's traditional model and founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and to themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern life. Global migration, new kinds of co-housing, and even personal life changes are among the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for the concept of "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unleashes multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to formalize a reversible proposition that adjusts to the scale of time and its multiple reformulations, many of which are unpredictable. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, finally, the nature of the people living in them. The ability to anticipate the unexpected, adapt to social factors, and account for demographic shifts in society to stabilize communities is the foundation of genuinely innovative sustainability.
Keywords: modular, timber, flexibility, housing
Procedia PDF Downloads 78
105 Adapting Hazard Analysis and Critical Control Points (HACCP) Principles to Continuing Professional Education
Authors: Yaroslav Pavlov
Abstract:
In the modern world, ensuring quality has become increasingly important in various fields of human activity. One universal approach to quality management, proven effective in the food industry, is the HACCP (Hazard Analysis and Critical Control Points) concept. Based on principles of preventing potential hazards to consumers at all stages of production, from raw materials to the final product, HACCP offers a systematic approach to identifying and assessing risks and managing critical control points (CCPs). Initially used primarily in food production, it was later effectively adapted to the food service sector. Implementing HACCP provides organizations with a reliable foundation for improving food safety, covering all links in the food chain from producer to consumer, making it an integral part of modern quality management systems. The main principles of HACCP—hazard identification, CCP determination, effective monitoring procedures, corrective actions, regular checks, and documentation—are universal and can be adapted to other areas. The adaptation of the HACCP concept is relevant for continuing professional education (CPE), with certain reservations. Specifically, it is reasonable to abandon the term ‘hazards’, as deviations in CCPs do not pose dangers, unlike in food production. However, the approach through CCP analysis and the use of HACCP's main principles are promising for educational services. This is primarily because it allows for identifying key CCPs based on the value creation model of a specific educational organization and consequently focusing efforts on specific CCPs to manage the quality of educational services. This methodology can be called the Analysis of Critical Points in Educational Services (ACPES).
ACPES offers a similar approach to managing the quality of educational services, focusing on preventing and eliminating potential risks that could negatively impact the educational process, hinder learners' achievement of set educational goals, and ultimately lead students to reject the organization's educational services. ACPES adapts proven HACCP principles to educational services, enhancing quality management effectiveness and student satisfaction. ACPES includes identifying potential problems at all stages of the educational process, from initial interest to graduation and career development. In ACPES, the term "hazards" is replaced with "problematic areas," reflecting the specific nature of the educational environment. Special attention is paid to determining CCPs—stages where corrective measures can most effectively prevent or minimize the risk of failing educational goals. The ACPES principles align with HACCP's principles, adjusted for the specificities of CPE. The method of the learner's journey map (a variation of the Customer Journey Map, CJM) can be used to overcome the complexity of formalizing the production chain in educational services. CJM provides a comprehensive understanding of the learner's experience at each stage, facilitating targeted and effective quality management. Thus, integrating the learner's journey map into ACPES represents a significant extension of the methodology's capabilities, ensuring a comprehensive understanding of the educational process and forming an effective quality management system focused on meeting learners' needs and expectations.
Keywords: quality management, continuing professional education, customer journey map, HACCP
Procedia PDF Downloads 37
104 Photobleaching Kinetics and Epithelial Distribution of Hexylaminolevulinate-Induced PpIX in Rat Bladder Cancer
Authors: Sami El Khatib, Agnès Leroux, Jean-Louis Merlin, François Guillemin, Marie-Ange D’Hallewin
Abstract:
Photodynamic therapy (PDT) is a treatment modality based on the cytotoxic effect that occurs in target tissues when a photosensitizer interacts with light in the presence of oxygen. One of the major advances in PDT can be attributed to the use of topical aminolevulinic acid (ALA) to induce Protoporphyrin IX (PpIX), both for the treatment of early-stage cancers and for diagnosis. ALA is a precursor in the heme synthesis pathway. Locally delivered to the target tissue, ALA overcomes the negative feedback exerted by heme and promotes the transient formation of PpIX in situ, which can reach critical effective levels in cells and tissue. Whereas the early steps of the heme pathway occur in the cytosol, PpIX synthesis takes place in the mitochondrial membranes, and PpIX fluorescence is expected to accumulate in close vicinity to the initial building site and to progressively diffuse to the neighboring cytoplasmic compartment or other lipophilic organelles. PpIX is highly reactive and is degraded when irradiated with light. PpIX photobleaching is believed to be governed by a singlet-oxygen-mediated mechanism in the presence of oxidized amino acids and proteins. PpIX photobleaching and the subsequent spectral phototransformation have been widely described in tumor cells incubated in vitro with ALA solution, and ex vivo in human and porcine mucosa superfused with hexylaminolevulinate (hALA). PpIX photobleaching has also been studied in vivo, using animal models such as normal or tumor-bearing mouse skin and the orthotopic rat bladder model. Hexylaminolevulinate, a more potent lipophilic derivative of ALA, has been proposed as an adjunct to standard cystoscopy in the fluorescence diagnosis of bladder cancer and other malignancies. We have previously reported the effectiveness of hALA-mediated PDT of rat bladder cancer.
Although normal and tumor bladder epithelium exhibit similar fluorescence intensities after intravesical instillation at two hALA concentrations (8 and 16 mM), the therapeutic response at 8 mM and 20 J/cm² was completely different from the one observed at 16 mM irradiated with the same light dose. Whereas an 8 mM instillation destroys the tumor while leaving the underlying submucosa and muscle intact, 16 mM sensitization and subsequent illumination results in the complete destruction of the underlying bladder wall but leaves the tumor undamaged. The objective of the current study is to unravel the mechanism underlying this apparent contradiction. PpIX extraction showed identical amounts of photosensitizer in tumor-bearing bladders at both concentrations. Photobleaching experiments revealed mono-exponential decay curves in both situations, but with a decay constant twice as fast for the 16 mM bladders. Fluorescence microscopy shows an identical fluorescence pattern, with bright spots, for normal bladders at both concentrations and for tumor bladders at 8 mM. Tumor bladders at 16 mM exhibit a more diffuse cytoplasmic fluorescence distribution. The different response to PDT with regard to the initial pro-drug concentration can thus be attributed to the different cellular localization.
Keywords: bladder cancer, hexyl-aminolevulinate, photobleaching, confocal fluorescence microscopy
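As a rough illustration of the photobleaching analysis described above, a mono-exponential model F(t) = F0·exp(-kt) can be fitted by log-linear least squares, since ln F is linear in t. The sketch below uses synthetic, noise-free data with made-up parameter values, not the study's measurements.

```python
import math

def fit_mono_exponential(times, signal):
    """Log-linear least-squares fit of F(t) = F0 * exp(-k * t).

    Returns (F0, k); assumes strictly positive signal values.
    """
    n = len(times)
    logs = [math.log(s) for s in signal]
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = y_mean - slope * t_mean
    return math.exp(intercept), -slope  # k is minus the slope of ln F vs t

# Synthetic, noise-free decay with illustrative parameters (not study data)
true_f0, true_k = 1.0, 0.05                       # a.u., 1/s
times = [2.0 * i for i in range(30)]              # irradiation time, s
signal = [true_f0 * math.exp(-true_k * t) for t in times]

f0, k = fit_mono_exponential(times, signal)
print(f"F0 = {f0:.3f} a.u., k = {k:.4f} per second")
```

A two-fold faster photobleaching, as observed for the 16 mM bladders, would simply appear as a doubled fitted k.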
Procedia PDF Downloads 407
103 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach
Authors: Rupesh S. Gundewar, Kanchan C. Khare
Abstract:
Runoff contribution from urban areas comes mainly from manmade structures, with a few natural contributors. The manmade structures are buildings, roads, and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff is attenuated by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages include catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has yielded the percentage contribution of each. Sanders et al. have reported that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. have reported that catchment slope has an impact of 16% on bare standard soil and 24% on grassed soil on rainfall runoff. Infiltration, being dependent on the pervious/impervious ratio, is catchment-specific, but the literature survey indicates a range of 15% to 30% loss of rainfall runoff across various catchment study areas. Catchment length and channel rerouting also play a considerable role in the reduction of rainfall runoff. Ground infiltration adds to the runoff where the groundwater table is very shallow and the soil saturates even in a low-intensity storm; this inflow, together with surface inflow, contributes about 2% of the total runoff volume. Considering these various contributing factors, the literature survey shows that an integrated modelling approach needs to be considered. Traditional storm water network models can predict to a fair degree of accuracy provided no interaction with receiving waters (river, sea, canal, etc.), ground infiltration, treatment works, etc. is assumed.
When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, and as a result the correct flooding situation is very rarely addressed accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It makes it possible to identify the source of flow in the system, understand how it is conveyed, and assess its impact on the receiving body. It also confirms important pain points, hydraulic controls, and the sources of flooding that could not easily be understood with a discrete modelling approach. This also enables decision makers to identify solutions that can be spread throughout the catchment rather than being concentrated at the single point where the problem manifests. It can thus be concluded from the literature survey that the representation of urban detail can be a key differentiator in successfully understanding a flooding issue. The intent of this study is to accurately predict the runoff from impermeable areas in an urban area in India. A representative area has been selected for which data were available, and predictions have been made and corroborated with the actual measured data.
Keywords: runoff, urbanization, impermeable response, flooding
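The percentage losses collected in the literature survey above can be combined into a crude first estimate of net runoff. The sketch below simply applies sequential fractional losses to a gross rainfall volume, a deliberate simplification of the integrated modelling approach the study advocates; the loss values are illustrative mid-range figures, not calibrated data.

```python
def net_runoff(rainfall_volume, loss_fractions):
    """Apply sequential fractional losses to a gross rainfall volume.

    Each fraction (0-1) is removed multiplicatively, an illustrative
    simplification rather than the integrated model in the abstract.
    """
    volume = rainfall_volume
    for fraction in loss_fractions:
        volume *= (1.0 - fraction)
    return volume

# Illustrative mid-range values from the literature cited above
losses = {
    "vegetation canopy": 0.10,  # 7-12% reported by Sanders et al.
    "infiltration": 0.20,       # 15-30% reported across catchments
}
print(net_runoff(100.0, losses.values()))  # net volume from 100 units of rain
```

An integrated model would replace these fixed fractions with process interactions (groundwater, receiving waters, treatment works), which is exactly where the discrete approach breaks down.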
Procedia PDF Downloads 250
102 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation
Authors: A. Zapata, Orlando Arroyo, R. Bonett
Abstract:
Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, the numerical analyses assume a rigid-diaphragm hypothesis. To address this, two case-study buildings were selected, with the low-rise and mid-rise characteristics of TWRCB in Colombia. The buildings were analyzed in OpenSees using the MVLEM-3D element for the walls and shell elements for the slabs, to capture the effect of diaphragm coupling on the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using a set of 44 far-field ground motions from FEMA P-695, scaled by factors of 1.0 and 1.5 to consider the probability of collapse for the design basis earthquake (DBE) and the maximum considered earthquake (MCE), according to the site locations and hazard zone of the archetypes in the Colombian NSR-10.
Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of absent, rigid, and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is modelled explicitly, depending on the plan configuration of the structural walls; additionally, the nonlinear behaviour with no slab is more conservative than when the slab is represented. Including the flexible slab in the analysis underlines the importance of considering the slab's contribution to the distribution of shear forces between structural elements according to design resistance and rigidity. The dynamic analysis revealed that including the slab reduces the collapse probability of this system, owing to lower displacements and deformations, enhancing residents' safety and seismic performance. Modelling the slab is important to capture its real effect on the distribution of shear forces in walls due to coupling, to estimate the correct nonlinear behaviour of this system, and to proportion the correct resistance and rigidity of the elements in design, reducing the possibility of damage during an earthquake.
Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm
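The overstrength values quoted above come from the ratio of the peak pushover base shear to the design base shear. The sketch below shows that calculation with invented curve ordinates, not the archetype results.

```python
def overstrength_factor(pushover_base_shears, design_base_shear):
    """Overstrength = peak base shear from the pushover curve / design base shear."""
    return max(pushover_base_shears) / design_base_shear

# Invented pushover curve ordinates in kN, not the archetype results
curve = [0.0, 850.0, 1400.0, 1650.0, 1520.0]
print(overstrength_factor(curve, design_base_shear=1100.0))  # 1.5, within the reported 1.1-2 range
```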
Procedia PDF Downloads 74
101 Top-Down, Middle-Out, Bottom-Up: A Design Approach to Transforming Prison
Authors: Roland F. Karthaus, Rachel S. O'Brien
Abstract:
Over the past decade, the authors have undertaken applied research aimed at enabling transformation within the prison service, to improve conditions and outcomes for those living, working and visiting in prisons in the UK and for the communities they serve. The research has taken place against a context of reducing resources and public discontent at increasing levels of violence, deteriorating conditions and persistently high levels of re-offending. Top-down governmental policies have mainly been ineffectual and in some cases counter-productive. The prison service is characterised by hierarchical organisation, and the research has applied design thinking at multiple levels to challenge and precipitate change: top-down, middle-out and bottom-up. The research employs three distinct but related approaches: system design (top-down), working at the national policy level to analyse the changing policy context, identify opportunities and challenges, and engage with Ministry of Justice commissioners and sector organisations to facilitate debate, introduce new evidence and provoke creative thinking; place-based design (middle-out), working with individual prison establishments as pilots to illustrate and test the potential for local empowerment, creative change and improved architecture within place-specific contexts and organisational hierarchies; and everyday design (bottom-up), working with individuals in the system to explore the potential for localised, significant demonstrator changes, including collaborative design, capacity building and empowerment in skills, employment, communication, training and other activities. The research spans a series of projects, through which the methodological approach has developed responsively.
The projects include a place-based model for the re-purposing of Ministry of Justice land assets for the purposes of rehabilitation; an evidence-based guide to improving prison design for health and well-being; and a capacity-based employment, skills and self-build project as a template for future open prisons. The overarching research has enabled knowledge to be developed and disseminated through policy and academic networks. Whilst the research remains live and continuing, key findings are emerging as the basis for a new methodological approach to effecting change in the UK prison service. An interdisciplinary approach is necessary to overcome the barriers between distinct areas of the prison service. Sometimes referred to as total environments, prisons encompass entire social and physical environments which are themselves orchestrated by institutional arms of government, resulting in complex systems that cannot be meaningfully engaged through narrow disciplinary lenses. A scalar approach is necessary to connect strategic policies with individual experiences and potential, through the medium of individual prison establishments operating as discrete entities within the system. A reflexive process is necessary to connect research with action in a responsive mode, learning to adapt as the system itself changes. The role of individuals in the system, their latent knowledge and experience, and their ability to engage and become agents of change are essential. Whilst the specific characteristics of the UK prison system are unique, the approach is internationally applicable.
Keywords: architecture, design, policy, prison, system, transformation
Procedia PDF Downloads 133
100 Memories of Lost Fathers: The Unfinished Transmission of Generational Values in Hungarian Cinema
Authors: Peter Falanga
Abstract:
During the process of de-Stalinization that began in 1956 with the Twentieth Congress of the Soviet Communist Party, many filmmakers in Hungary chose to explore their country’s political discomforts by using Socialist Realism as a negative model against which they could react to the dominating ideology. A renewed national film industry and a more permissive political regime allowed filmmakers to take to task the plight of the preceding generation, who had experienced the fatal political turmoil of both World Wars and the purges of Stalin. What follows is no longer the multigenerational unity found in Socialist Realism, wherein both the old and the young embrace Stalin’s revolutionary optimism; instead, the protagonists are parentless, and thus their connection to the previous generation is partially severed. In these films, violent historical forces leave one generation to search both for a connection with their family’s past and for moral guidance to direct their future. István Szabó’s Father (1966), Márta Mészáros’ Diary for My Children (1984), and Pál Gábor’s Angi Vera (1978) each consider the fraught relationship between successive generations through the lens of postwar youth. A characteristic all of their protagonists share is that they are missing one or both parents and cope with familial loss either through recalling memories of their parents in dream-like sequences or, in the case of Angi Vera, through embracing the surrogate paternalism that the Communist Party promises to provide. This paper considers the argument these films present about the progress of Hungarian history, and how this topic is explored in more recent films that similarly focus on the transmission of generational values.
Scholars such as László Strausz and John Cunningham have written on the continuing concern with the transmission of generational values in more recent films such as István Szabó’s Sunshine (1999), Béla Tarr’s Werckmeister Harmonies (2000), György Pálfi’s Taxidermia (2006), Ágnes Kocsis’ Pál Adrienn (2010), and Kornél Mundruczó’s Evolution (2021). These films, they argue, offer intimate portrayals of the various sweeping political changes in Hungary’s history and question how these epochs or events have shaped Hungarian identities. If these films attempt to personalize the historical shifts of Hungary, then what is the significance of featuring characters who have lost one or both parents? An attempt to understand this coherent trend in Hungarian cinema will profit from examining the earlier, celebrated films of Szabó, Mészáros, and Gábor, who inaugurated this preoccupation with generational values. The pervasive interplay of dreams and memory in their films adds a further element to their argument concerning historical progression. This paper incorporates Richard Terdiman’s notion of the “dialectics of memory”, in which memory is in a constant process of negation and reinvention, to explain why these directors prefer to explore Hungarian identity through the disarranged form of psychological realism over the linear causality of historical realism.
Keywords: film theory, Eastern European studies, film history, Eastern European history
Procedia PDF Downloads 122
99 Structural Monitoring of Externally Confined RC Columns with Inadequate Lap-Splices, Using Fibre-Bragg-Grating Sensors
Authors: Petros M. Chronopoulos, Evangelos Z. Astreinidis
Abstract:
A major issue in the structural assessment and rehabilitation of existing RC structures is the inadequate lap-splicing of the longitudinal reinforcement. Although prohibited by modern Design Codes, the practice of arranging lap-splices inside the critical regions of RC elements was commonly applied in the past, and it is still the rule today, at least for conventional new buildings. Therefore, a lot of relevant research is ongoing in many earthquake-prone countries. The rehabilitation of deficient lap-splices of RC elements by means of external confinement is widely accepted as the most efficient technique. If correctly applied, this versatile technique offers a limited increase in flexural capacity and a considerable increase in local ductility and in axial and shear capacities. Moreover, this intervention affects neither the stiffness of the elements nor the dynamic characteristics of the structure. The technique has been extensively discussed and researched, contributing to a vast accumulation of technical and scientific knowledge that has been reported in relevant books, reports and papers, and included in recent Design Codes and Guides. These references mostly deal with modeling and redesign, covering both the enhanced axial and shear capacity (due to the additional external closed hoops or jackets) and the increased ductility (due to the confining action, which prevents the unzipping of lap-splices and the buckling of continuous reinforcement). An analytical and experimental program devoted to RC members with lap-splices has been completed in the Laboratory of Reinforced Concrete of the National Technical University of Athens, Greece. This program aims at the proposal of a rational and safe theoretical model and at the calibration of the relevant Design Codes’ provisions. Tests on forty-two (42) full-scale specimens, covering mostly beams and columns (not walls), strengthened or not, with adequate or inadequate lap-splices, have already been performed and evaluated.
In this paper, the results for twelve (12) specimens under fully reversed cyclic actions are presented and discussed. In eight (8) specimens the lap-splices were inadequate (splicing lengths of 20 or 30 bar diameters), and they were retrofitted before testing by means of additional external confinement. The two most commonly applied confining materials, steel and FRPs, were used in this study. More specifically, jackets made of CFRP wraps or light cages made of mild steel were applied. The main parameters of these tests were (i) the degree of confinement (internal and external) and (ii) the length of the lap-splices, equal to 20, 30 or 45 bar diameters. The tests were thoroughly instrumented and monitored by means of conventional (LVDTs, strain gauges, etc.) and innovative (optic fibre-Bragg-grating) sensors. This allowed a thorough investigation of the most influential design parameter, namely the hoop stress developed in the confining material. Based on these test results and on comparisons with the provisions of modern Design Codes, it can be argued that shorter (than the normative) lap-splices, commonly found in old structures, can still be effective and safe (at least for lengths above an absolute minimum), depending on the required ductility, provided a properly arranged and adequately detailed external confinement is applied.
Keywords: concrete, fibre-Bragg-grating sensors, lap-splices, retrofitting / rehabilitation
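Hoop strain in the confining material is obtained from the Bragg-wavelength shifts recorded by the FBG sensors. The sketch below applies the standard strain-shift relation delta_lambda / lambda_B = (1 - p_e) * strain, neglecting temperature compensation; the photo-elastic coefficient and example numbers are typical assumed values, not taken from the paper.

```python
def fbg_strain(lambda_bragg_nm, delta_lambda_nm, p_e=0.22):
    """Axial strain from an FBG Bragg-wavelength shift.

    Standard relation: delta_lambda / lambda_B = (1 - p_e) * strain,
    neglecting temperature effects; p_e ~ 0.22 is a typical effective
    photo-elastic coefficient for silica fibre (assumed, not from the paper).
    """
    return delta_lambda_nm / (lambda_bragg_nm * (1.0 - p_e))

# Example: a 1550 nm grating bonded to the confining jacket, shifting by 1.2 nm
strain = fbg_strain(1550.0, 1.2)
print(f"{strain * 1e6:.0f} microstrain")
```

Multiplying the strain by the confining material's elastic modulus then gives the hoop stress, e.g. sigma = E * epsilon for a steel cage still in its elastic range.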
Procedia PDF Downloads 250
98 Foucault and Governmentality: International Organizations and State Power
Authors: Sara Dragisic
Abstract:
Using the theoretical analysis of the birth of biopolitics that Foucault performed through the history of liberalism and neoliberalism, in this paper we try to show how, precisely through problematizing the role of international institutions, the model of governance differs from previous ways of objectifying body and life. Are the state and its mechanisms still a Leviathan to fight against, or can the state even be the driver of resistance against the proponents of modern governance and biopolitical power? Do paradigmatic examples of biopolitics still appear through sovereignty and (international) law, or does precisely this sphere show a significant dose of incompetence and powerlessness in relation not only to the economic sphere (Foucault’s critique of neoliberalism) but also to the new politics of freedom? Have the struggle for freedom and human rights, as well as the war on terrorism, opened a new spectrum of biopolitical processes, manifested precisely through new international institutions and humanitarian discourse? We try to answer these questions in the following way. On the one hand, we show that the views of authors such as Agamben and Hardt and Negri, for whom the state and sovereignty are enemies to be defeated or overcome, fail to see how such attempts could translate into the politicization of life, as is done in many examples through the doctrine of liberal interventionism and humanitarianism. On the other hand, we point out that it is precisely the humanitarian discourse and the defense of the right to intervention that can be the incentive and basis for the politicization of the category of life and lead to the selective application of human rights.
Zizek’s example of the killing of United Nations workers and doctors in a village during the Vietnam War, who were targeted even before police or soldiers because they were seen as a powerful instrument of American imperialism (precisely because they were sincerely trying to help the population), will be the focus of this part of the analysis. We will ask whether such an interpretation is a kind of liquidation of the extreme left of the political (Laclau), or whether it can at least partly explain the need to review the functioning of international organizations, ranging from those dealing with humanitarian aid (and humanitarian military interventions) to those dealing with the protection and security of the population, primarily from growing terrorism. Based on the above examples, we also explain how the discourse of terrorism itself plays a dual role: it can appear as a tool of liberal biopolitics, although, more superficially, it mostly appears as an enemy that wants to destroy the liberal system and its values. This brings us to the basic problem that this paper tackles: do the mechanisms of institutional struggle for human rights and freedoms, often seen as opposed to the security mechanisms of the state, serve the governance of citizens in such a way that the latter themselves participate in producing biopolitical governmental practices? Is freedom today "nothing but the correlative development of apparatuses of security" (Foucault)? Or we can continue this line of Foucault’s argumentation and enhance the interpretation with the important question of what precisely reflects, today, the change in the rationality of governance through which society is transformed from a passive object into a subject of its own production.
Finally, in order to understand the skills of biopolitical governance in modern civil society, it is necessary to pay attention to the status of international organizations, which seem to have become a significant site for the implementation of global governance. In this sense, the power of sovereignty can turn out to be an insufficiently strong power of security policy, which can go hand in hand with policies of freedom, through neoliberal governmental techniques.
Keywords: neoliberalism, Foucault, sovereignty, biopolitics, international organizations, NGOs, Agamben, Hardt & Negri, Zizek, security, state power
Procedia PDF Downloads 206
97 Selected Macrophyte Populations Promote Coupled Nitrification and Denitrification Function in a Eutrophic Urban Wetland Ecosystem
Authors: Rupak Kumar Sarma, Ratul Saikia
Abstract:
Macrophytes encompass a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles and in maintaining biodiversity at the ecosystem level. The high carbon-rich underground biomass of macrophyte populations may harbour diverse microbial communities with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study the macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) of North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentrations. Sediment redox potential and lake depth were chosen as the benchmarks for collecting the plant and sediment samples. The average highest depths in winter (January 2016) and summer (July 2016) were recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had a distinct effect on variation in macrophyte community composition. Overall, the dominant macrophytic populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes, and Euryale ferox. There was a distinct correlation between the variation of major sediment physicochemical parameters and changes in macrophyte community composition.
Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by the plant species Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox, and Monochoria hastaefolia, which might signify a stable nitrification and denitrification process in the sites dominated by these aquatic plants. This was further examined by a systematic analysis of microbial populations through culture-dependent and culture-independent approaches. The culture-dependent bacterial community study revealed larger populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study with bacterial 16S rDNA V3-V4 metagenome sequencing revealed overall similar bacterial phyla in all the sediment samples collected during the study. Thus, the nitrifying and denitrifying molecular markers might be unevenly distributed among the sediment samples. The diversity and abundance of these markers in the sediment samples are under investigation. The role of different aquatic plant functional types in microorganism-mediated nitrogen cycle coupling could thus be screened further on the basis of this initial investigation.
Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification
Procedia PDF Downloads 173
96 Official Game Account Analysis: Factors Influencing Users’ Judgments in Limited-Word Posts
Authors: Shanhua Hu
Abstract:
Social media, as a critical propagandizing form for film, video games, and digital products, has received substantial research attention, but several critical barriers remain: (1) few studies explore the internal and external connections of a product as part of the multimodal context that gives rise to readability and commercial return; (2) multimodal analysis of game publishers' official product accounts and its impact on user behaviors, including purchase intention, social media engagement, and playing time, is lacking; (3) no standardized, ecologically valid data varying by game type are available to study the complexity of an official account's postings within a time period. This proposed research helps to tackle these limitations in order to develop a model of readability study that is more ecologically valid, robust, and thorough. To accomplish this objective, this paper provides a diverse dataset comprising different visual elements and messages collected from the official Twitter accounts of the Top 20 best-selling games of 2021. Video game companies target potential users through social media; a popular approach is to set up an official account to maintain exposure. Typically, major game publishers create an official account on Twitter months before the game's release date to post updates on the game's development, announce collaborations, and reveal spoilers. Analyses of tweets from these official Twitter accounts can assist publishers and marketers in identifying how to deploy advertising efficiently and precisely to increase game sales. The purpose of this research is to determine how official game accounts use Twitter to attract new customers, specifically which types of messages are most effective at increasing sales.
The dataset includes the number of days before the actual release date for each Twitter post, the readability of the post (Flesch Reading Ease Score, FRES), the number of emojis used, the number of hashtags, the number of followers of the mentioned users, the categorization of the posts (i.e., spoilers, collaborations, promotions), and the number of video views. The timeline of Twitter postings from official accounts is compared to the history of pre-orders and sales figures to determine the potential impact of social media posts. This study aims to determine how the above-mentioned characteristics of official accounts' Twitter postings influence the sales of the game and to examine the possible causes of this influence. The outcome will provide researchers with a list of potential aspects that could influence people's judgments in limited-word posts. With increased average online time, users adapt more quickly than before to online information exchange and reading habits, such as word choice, sentence length, and the use of emojis or hashtags. The study of official game account promotion will not only enable publishers to create more effective promotion techniques in the future but also provide ideas for future research on the influence of social media posts with a limited number of words on consumers' purchasing decisions. Future research can focus on more specific linguistic aspects, such as precise word choice in advertising.
Keywords: engagement, official account, promotion, Twitter, video game
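The readability metric named in the dataset, the Flesch Reading Ease Score, is a fixed formula over sentence, word, and syllable counts: FRES = 206.835 − 1.015·(words/sentences) − 84.6·(syllables/words). A minimal sketch follows; the vowel-group syllable counter is a naive heuristic (an assumption on our part — production readability tools count syllables more carefully), so the function is illustrative rather than the study's actual implementation.

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups; drop one for a trailing silent 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fres(text):
    """Flesch Reading Ease Score:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) \
                   - 84.6 * (syllables / len(words))

# Short, monosyllabic sentences score near the top of the scale.
print(round(fres("The cat sat on the mat."), 3))  # 116.145
```

Higher scores mean easier text; tweets under a strict character limit tend to score high, which is one reason FRES is a convenient covariate for limited-word posts.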
Procedia PDF Downloads 76
95 Composite Electrospun Aligned PLGA/Curcumin/Heparin Nanofibrous Membranes for Wound Dressing Application
Authors: Jyh-Ping Chen, Yu-Tin Lai
Abstract:
Wound healing is a complicated process involving overlapping hemostasis, inflammation, proliferation, and maturation phases. Ideal wound dressings can replace native skin functions in full-thickness skin wounds by accelerating healing and reducing scar formation. Poly(lactic-co-glycolic acid) (PLGA) is a U.S. FDA-approved biodegradable polymer suitable for use as a wound dressing material. Several in vitro and in vivo studies have demonstrated the effectiveness of curcumin in decreasing the release of inflammatory cytokines, inhibiting enzymes associated with inflammation, and scavenging free radicals, the major cause of inflammation during wound healing. Heparin has binding affinities to various growth factors. With the unique and beneficial features offered by these molecules toward the complex process of wound healing, we postulate that a composite wound dressing constructed from PLGA, curcumin, and heparin would be a good candidate to accelerate scarless wound healing. In this work, we use electrospinning to prepare curcumin-loaded aligned PLGA nanofibrous membranes (PC NFMs). PC NFMs were further subjected to oxygen plasma modification and surface-grafted with heparin through carbodiimide-mediated covalent bond formation to prepare curcumin-loaded PLGA-g-heparin (PCH) NFMs. The nanofibrous membranes could act as three-dimensional scaffolds to attract fibroblast migration, reduce inflammation, and increase the concentrations of wound-healing-related growth factors at wound sites. Scanning electron microscopy analysis showed nanofibers with diameters ranging from 456 to 479 nm and alignment angles within 0.5°. The NFMs show high tensile strength, good water absorptivity, and suitable pore sizes for nutrient/waste transport. Exposure of human dermal fibroblasts to the extraction medium of PC or PCH NFMs showed significantly better protective effects against hydrogen peroxide than that of PLGA NFMs.
In vitro wound healing assays also showed that the extraction medium of PCH NFMs induced significantly better migration of fibroblasts than that of PC NFMs, which in turn outperformed PLGA NFMs. The in vivo healing efficiency of the NFMs was further evaluated in a full-thickness excisional wound healing diabetic rat model. After 14 days, PCH NFMs exhibited an 86% wound closure rate, significantly different from the other groups (79% for PC and 73% for PLGA NFMs). Real-time PCR analysis indicated that PC and PCH NFMs down-regulated anti-oxidative enzymes such as glutathione peroxidase (GPx) and superoxide dismutase (SOD), well-known enzymes involved in cellular responses to oxidative stress during inflammation. From histology, the wound area treated with PCH NFMs showed more vascular lumen formation, as revealed by immunohistochemistry of α-smooth muscle actin. The wound site also showed higher collagen type III (65.8%) and lower collagen type I (3.5%) expression, indicating scarless wound healing. Western blot analysis showed that PCH NFMs have good affinity toward growth factors, with increased concentrations of transforming growth factor-β (TGF-β) and fibroblast growth factor-2 (FGF-2) at the wound site to accelerate wound healing. From these results, we suggest PCH NFMs as a promising candidate for wound dressing applications.
Keywords: curcumin, heparin, nanofibrous membrane, poly(lactic-co-glycolic acid) (PLGA), wound dressing
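The wound closure rates quoted above follow the standard percentage reduction in wound area relative to day 0. A minimal sketch of that arithmetic, with hypothetical area values chosen to reproduce the reported percentages (the abstract gives only percentages, not areas):

```python
def wound_closure_pct(initial_area_mm2, current_area_mm2):
    """Percentage reduction in wound area relative to day 0:
    (A0 - At) / A0 * 100."""
    return (initial_area_mm2 - current_area_mm2) * 100.0 / initial_area_mm2

# Hypothetical areas consistent with the reported day-14 closure rates:
print(wound_closure_pct(100.0, 14.0))  # 86.0  (PCH NFM)
print(wound_closure_pct(100.0, 27.0))  # 73.0  (PLGA NFM)
```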
Procedia PDF Downloads 155
94 Flexural Response of Sandwiches with Micro Lattice Cores Manufactured via Selective Laser Sintering
Authors: Emre Kara, Ali Kurşun, Halil Aykul
Abstract:
Lightweight sandwiches obtained with the use of various core materials such as foams, honeycombs, lattice structures, etc., which have a high energy-absorbing capacity and a high strength-to-weight ratio, are suitable for several applications in the transport industry (automotive, aerospace, shipbuilding), where reduced fuel consumption, increased load-carrying capacity, vehicle safety, and decreased emission of harmful gases are very important aspects. While sandwich structures with foams and honeycombs have been applied for many years, there is growing interest in a new generation of sandwiches with micro lattice cores. Various production methods have been developed to produce these core structures. One of these is an additive manufacturing technique called selective laser sintering/melting (SLS/SLM), which is very popular nowadays because it saves production time and can produce complex topologies. The static bending and dynamic low-velocity impact behavior of sandwiches with carbon fiber/epoxy skins and micro lattice cores produced via SLS/SLM has been reported in only a few studies. The goal of this investigation was to analyze the flexural response of sandwiches consisting of glass fiber reinforced plastic (GFRP) skins and micro lattice cores manufactured via SLS under thermo-mechanical loads, comparing the results in terms of peak load and absorbed energy values with respect to the effects of core cell size, temperature, and support span length. The micro lattice cores were manufactured using SLS technology, which builds the product from a design drawn in 3D computer-aided design (CAD) software. The lattice cores, designed as a body-centered cubic (BCC) model with two different cell sizes (d = 2 and 2.5 mm) and a strut diameter of 0.3 mm, were produced from titanium alloy (Ti6Al4V) powder.
During the production of all the core materials, the same production parameters, such as laser power, laser beam diameter, and building direction, were kept constant. The Vacuum Infusion (VI) method was used to produce the skin materials, made of [0°/90°] woven S-glass prepreg laminates. The core and skins were combined under VI. Three-point bending tests were carried out on a servo-hydraulic test machine with different support span distances (L = 30, 45, and 60 mm) under various temperatures (T = 23, 40, and 60 °C) in order to analyze the influences of support span and temperature. The failure mode of the collapsed sandwiches was investigated using 3D computed tomography (CT), which allows a three-dimensional reconstruction of the analyzed object. The main results of the bending tests are the load-deflection curves and the peak force and absorbed energy values. The results were compared with respect to the effects of cell size, support span, and temperature. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry, where problems of collision and crash have increased in recent years.
Keywords: light-weight sandwich structures, micro lattice cores, selective laser sintering, transport application
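The absorbed-energy values reported alongside the load-deflection curves are conventionally the area under the curve up to a chosen deflection. A hedged sketch using trapezoidal integration follows; the sampling points and the conversion convention (mm and N in, joules out) are illustrative assumptions, not the test machine's actual post-processing.

```python
def absorbed_energy_joules(deflection_mm, load_n):
    """Area under a load-deflection curve via the trapezoidal rule.
    Inputs in mm and N; result converted from N*mm to J (divide by 1000)."""
    area_nmm = 0.0
    for i in range(1, len(deflection_mm)):
        area_nmm += 0.5 * (load_n[i] + load_n[i - 1]) \
                        * (deflection_mm[i] - deflection_mm[i - 1])
    return area_nmm / 1000.0

# Illustrative linear ramp to a 1000 N peak over 10 mm of deflection:
d = [0.0, 5.0, 10.0]
f = [0.0, 500.0, 1000.0]
print(max(f))                         # peak load: 1000.0 N
print(absorbed_energy_joules(d, f))   # 5.0 J
```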
Procedia PDF Downloads 340
93 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches
Authors: Maria Ledinskaya
Abstract:
This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from the business, software development, and manufacturing fields to museum conservation by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – an unresearched collection, unknown condition and materials, an unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the points of view of Traditional, Agile, Lean, and Hybrid project management.
The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean, and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: project management, conservation, waterfall, agile, lean, hybrid
Procedia PDF Downloads 99
92 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce
Authors: Stephan Boehm, Julia Engel, Judith Eisser
Abstract:
During the last decade, communication on the Internet transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content, and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, a presentation of product- or service-related information accumulated on websites, micro pages, or portals often remains the pivot and focal point of a customer journey. A more recent change in user behavior – especially in younger user groups and in Asia – accompanies the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative to traditional one-to-one communication on mobile devices, such as voice calls or the short messaging service; they can also be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and functionalities provided on mobile websites. One option better aligned with the customer interaction in messaging apps is so-called chatbots. Chatbots are conversational programs or dialog systems simulating a text- or voice-based human interaction. They can be introduced in mobile messaging and social media apps using rule-based or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers’ user experience and emotions. The study is based on a random sample of about 60 smartphone users in the group of 20- to 30-year-olds.
Participants are randomly assigned into two groups and participate in a traditional website or an innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented using a Wizard-of-Oz experimental approach, for reasons of simplicity and to allow for more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior instead of browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while interacting with the website or the chatbot. This study is a work in progress. The results will provide first insights into the effects of chatbot usage on user experience and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications of situations where chatbots might be favorable in comparison to traditional website-based mobile commerce can be identified.
Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping
Procedia PDF Downloads 458
91 Mesovarial Morphological Changes in Offspring Exposed to Maternal Cold Stress
Authors: Ariunaa S., Javzandulam E., Chimegsaikhan S., Altantsetseg B., Oyungerel S., Bat-Erdene T., Naranbaatar S., Otgonbayar B., Suvdaa N., Tumenbayar B.
Abstract:
Introduction: Prenatal stress has been linked to heightened allergy sensitivity in offspring. However, there is a notable absence of research on the mesovarium structure of offspring born to mothers subjected to cold stress during pregnancy. Understanding the impact of maternal cold stress on the mesovarium structure could provide valuable insights into reproductive health outcomes in offspring. Objective: This study aims to investigate structural changes in the mesovarium of offspring born to cold-stressed rats. Material and Methods: Twenty female Wistar rats weighing around 200 g were chosen and evenly divided into four containers; 2-3 male rats were then introduced into each container. The Papanicolaou method was used to detect spermatozoa and determine the estrous stage from vaginal swabs taken from the female rats at 8:00 a.m. Female rats with spermatozoa present during the estrous phase of the estrous cycle were defined as pregnant. Pregnant rats were divided into experimental and control groups. The experimental group was stressed using a model of severe chronic cold stress for 30 days: the animals were exposed to cold for 3 hours each morning between 8:00 and 11:00 at a temperature of minus 15 degrees Celsius. The control group was kept under normal laboratory conditions. Newborn female rats from both the experimental and control groups were selected. At 2 months of age, the rats were euthanized by decapitation, and their mesovaria were collected. Tissues were fixed in 4% formalin, embedded in paraffin, and sectioned into 5 μm thick slices. The sections were stained with H&E and digitized with a digital microscope. The areas of brown fat and inflammatory infiltrations were quantified using ImageJ software. Blood cortisol levels were measured using ELISA. Data are expressed as the mean ± standard error of the mean (SEM). The Mann-Whitney test was used to compare the two groups. All analyses were performed using Prism (GraphPad Software).
A p-value of < 0.05 was considered statistically significant. Results: Offspring born to stressed mothers exhibited significant physiological differences compared to the control group. Specifically, the body weight of offspring from stressed mothers was significantly lower than in the control group (p=0.0002). Conversely, the cortisol level in offspring from stressed mothers was significantly higher (p=0.0446). Offspring born to stressed mothers showed a statistically significant increase in brown fat area compared to the control group (p=0.01). Additionally, offspring from stressed mothers had a significantly higher number of inflammatory infiltrates in their mesovarium compared to the control group (p=0.047). These results indicate the profound impact of maternal stress on offspring physiology, affecting body weight, stress hormone levels, metabolic characteristics, and inflammatory responses. Conclusion: Exposure to cold stress during pregnancy has significant repercussions on offspring physiology. Our findings demonstrate that cold stress exposure leads to increased blood cortisol levels, brown fat accumulation, and inflammatory cell infiltration in offspring. These results underscore the profound impact of maternal stress on offspring health and highlight the importance of mitigating environmental stressors during pregnancy to promote optimal offspring outcomes.
Keywords: brown fat, cold stress during pregnancy, inflammation, mesovarium
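The group comparisons above rest on the Mann-Whitney test. As an illustration of the underlying statistic (the study itself used GraphPad Prism; this sketch is not its implementation), the U statistic can be computed from midranks in a few lines of standard-library Python:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic (the smaller of U1 and U2),
    using midranks for tied values."""
    values = list(x) + list(y)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        # Find the run of tied values starting at sorted position i.
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    r1 = sum(ranks[: len(x)])               # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return min(u1, len(x) * len(y) - u1)

# Completely separated samples give the minimum possible U of 0:
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
```

The p-value then follows from the exact U distribution (or a normal approximation for larger samples), which is what a statistics package supplies on top of this statistic.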
Procedia PDF Downloads 45
90 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form
Authors: Claudia Trillo
Abstract:
Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from the new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation the cornerstone of any action aimed at uplifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral and hence should be constantly assessed against equity and social inclusion. This paper draws on a robust qualitative empirical dataset gathered through four years of research conducted in Boston to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the Habitat III Sustainable Development Goals rationale. The concept of inclusive growth has been considered essential to assess the social innovation domain in each of the chosen cases.
The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach emerging from the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization, and co-creation, collaborating at multiple scales and thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of the collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based and social-innovation-driven strategies. On the contrary, the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions enabling places to thrive and local economic activities to grow in a sustainable way.
Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies
Procedia PDF Downloads 104
89 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one must first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned to the application of software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of two HPGe detector models through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges 59.4-1836.1 keV and 59.4-1212.9 keV, respectively. Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
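The FEP efficiency comparison described above reduces to a simple ratio of net peak counts to expected emissions. The following Python sketch is not from the paper; the source activity, counting time, and simulated value below are hypothetical, chosen only to illustrate how an experimental efficiency and the relative deviation of a simulated value might be computed.

```python
# Illustrative sketch (not the authors' code). All numeric values are hypothetical.

def fep_efficiency(net_counts, live_time_s, activity_bq, gamma_intensity):
    """FEP efficiency = net peak counts / (live time * activity * emission probability)."""
    return net_counts / (live_time_s * activity_bq * gamma_intensity)

def relative_deviation(simulated, experimental):
    """Relative deviation (%) of a simulated efficiency from the experimental one."""
    return 100.0 * (simulated - experimental) / experimental

# Hypothetical point-source measurement (e.g. a 662 keV line, I_gamma = 0.851)
eff_exp = fep_efficiency(net_counts=52_000, live_time_s=3600,
                         activity_bq=5_000, gamma_intensity=0.851)
eff_sim = 0.00325  # value a Geant4 run of an optimised model might return
print(f"experimental: {eff_exp:.5f}, "
      f"deviation: {relative_deviation(eff_sim, eff_exp):+.1f}%")
```

In practice such deviations would be tabulated per energy line and averaged, which is how summary figures like the ∼4.6% quoted above are typically obtained.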
Procedia PDF Downloads 119
88 Implementation of a Web-Based Clinical Outcomes Monitoring and Reporting Platform across the Fortis Network
Authors: Narottam Puri, Bishnu Panigrahi, Narayan Pendse
Abstract:
Background: Clinical outcomes are the globally agreed-upon, evidence-based, measurable changes in health or quality of life resulting from patient care. Reporting of outcomes and their continuous monitoring provides an opportunity for both assessing and improving the quality of patient care. In 2012, the International Consortium for Health Outcomes Measurement (ICHOM) was founded, which has defined global Standard Sets for measuring the outcomes of various treatments. Method: Monitoring of clinical outcomes was identified as a pillar of Fortis' core value of Patient Centricity. The project was started as an in-house Clinical Outcomes Reporting Portal developed by the Fortis Medical IT team, using the Standard Sets of outcome measurement developed by ICHOM. A pilot was run at Fortis Escorts Heart Institute from Aug '13 to Dec '13. Starting Jan '14, it was implemented across 11 hospitals of the group. The scope was hospital-wide, covering the major clinical specialties: Cardiac Sciences and Orthopedics & Joint Replacement. The internally developed portal had its limitations in report generation, and capture of patient-reported outcomes was restricted. A year later, the company provisioned an ICHOM-certified software product that could provide a platform for data capture and reporting and ensure compliance with all ICHOM requirements. A year after the launch of the software, Fortis Healthcare became the first healthcare provider in Asia to publish clinical outcomes data for the Coronary Artery Disease Standard Set (comprising Coronary Artery Bypass Graft and Percutaneous Coronary Interventions) in the public domain (Jan 2016). Results: This project has helped in firmly establishing a culture of monitoring and reporting clinical outcomes across Fortis hospitals.
Given the diverse nature of the healthcare delivery model at the Fortis network, which comprises hospitals of varying size and specialty mix and practically covers the entire span of the country, standardization of data collection and reporting methodology is a huge achievement in itself. 95% case reporting was achieved, with more than 90% data completion, at the end of Phase 1 (March 2016). Post-implementation, the group now has one year of data from its own hospitals. This has helped identify gaps, plan ways to bridge them, and establish internal benchmarks for continual improvement. Beyond this, the value created for the group includes: 1. The entire Fortis community has been sensitized to the importance of clinical outcomes monitoring for patient-centric care; initial skepticism and cynicism were countered by effective stakeholder engagement and automation of processes. 2. Measuring quality is the first step in improving quality; data analysis has helped compare clinical results with best-in-class hospitals and identify improvement opportunities. 3. The clinical fraternity is extremely pleased to be part of this initiative and has taken ownership of the project. Conclusion: Fortis Healthcare is a pioneer in the monitoring of clinical outcomes. Implementation of ICHOM standards has helped the Fortis Clinical Excellence Program improve patient engagement and strengthen its commitment to its core value of Patient Centricity. Validation and certification of the clinical outcomes data by an ICHOM-certified supplier adds confidence to its claim of being a leader in this space. Keywords: clinical outcomes, healthcare delivery, patient centricity, ICHOM
Procedia PDF Downloads 236
87 Early Predictive Signs for Kasai Procedure Success
Authors: Medan Isaeva, Anna Degtyareva
Abstract:
Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. 
By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who underwent the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for early detection of patients who are less likely to benefit from the Kasai procedure, enabling clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. Being single-centered, its results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored and could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors. Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival
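As an illustration of how a multifactorial logistic prognostic model combines the reported predictors (gallbladder detectability, post-operative cholesterol, and direct bilirubin), consider the following sketch. The coefficients are invented for demonstration and are not the study's fitted values; only the direction of each effect follows the abstract.

```python
import math

# Hypothetical sketch; coefficients b0, b_gb, b_chol, b_bili are made up for
# illustration and are NOT the study's fitted values.

def kasai_success_probability(gallbladder_present, cholesterol_mmol_l,
                              direct_bilirubin_umol_l,
                              b0=-1.0, b_gb=1.5, b_chol=-0.3, b_bili=-0.02):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b_gb*GB + b_chol*chol + b_bili*bili)))."""
    z = (b0 + b_gb * (1 if gallbladder_present else 0)
         + b_chol * cholesterol_mmol_l + b_bili * direct_bilirubin_umol_l)
    return 1.0 / (1.0 + math.exp(-z))

# A detectable gallbladder raises the predicted probability of success, while
# elevated post-operative cholesterol and direct bilirubin lower it:
print(kasai_success_probability(True, 4.0, 30.0))
print(kasai_success_probability(False, 6.0, 120.0))
```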
Procedia PDF Downloads 54
86 Analysis of Short Counter-Flow Heat Exchanger (SCFHE) Using Non-Circular Micro-Tubes Operated on Water-CuO Nanofluid
Authors: Avdhesh K. Sharma
Abstract:
Key to the development of energy-efficient micro-scale heat exchanger devices is selecting a large heat-transfer-surface-to-volume ratio without much expense on recirculation pumps. The increased interest in short heat exchangers (SHE) is due to the accessibility of advanced technologies for manufacturing micro-tubes in the range of 1 μm to 1 mm. Such SHEs using micro-tubes are highly effective for high-flux heat transfer technologies. Nanofluids are used to enhance the thermal conductivity of the recirculated coolant and thus further enhance the heat transfer rate. However, the higher viscosity associated with a nanofluid demands more pumping power, so there is a trade-off between heat transfer rate and pressure drop with the geometry of the micro-tubes. Herein, a novel design of a short counter-flow heat exchanger (SCFHE) using non-circular micro-tubes flooded with CuO-water nanofluid is conceptualized by varying the ratio of surface area to cross-sectional area of the micro-tubes, and a framework for comparative analysis of such an SCFHE is presented. In the SCFHE concept, micro-tubes of various geometrical shapes (viz., triangular, rectangular, and trapezoidal) are arranged row-wise to facilitate two aspects: (1) allowing easy flow distribution for the cold and hot streams, and (2) maximizing the thermal interactions with neighboring channels. Adequate distribution of rows for the cold and hot flow streams enables these two aspects. For comparative analysis, a specific volume or cross-sectional area, comprising the flow area and the area corresponding to half the wall thickness, is assigned to each elemental cell and assumed constant, while variation in surface area is allowed by selecting different micro-tube geometries in the SCFHE.
An effective thermal conductivity model for the CuO-water nanofluid has been adopted, while the viscosity values for the water-based nanofluid are obtained empirically. Correlations for the Nusselt number (Nu) and Poiseuille number (Po) for micro-tubes have been derived or adopted, and the entrance effect is accounted for. The thermal and hydrodynamic performances of the SCFHE are defined in terms of effectiveness and pressure drop or pumping power, respectively. For defining the overall performance index of the SCFHE, two links are employed: the first relates the heat transfer between the fluid streams, q, to the pumping power, PP (q_j/PP_j), while the second relates the effectiveness, eff, to the pressure drop, dP (eff_j/dP_j). For the analysis, the inlet temperatures of the hot and cold streams are varied in the usual range of 20 °C to 65 °C. A fully turbulent regime is seldom encountered in micro-tubes, and transition of the flow regime occurs much earlier (i.e., ~Re = 1000). Thus, Re is fixed at 900; however, the uncertainty in Re due to the addition of nanoparticles to the base fluid is quantified by averaging Re. Moreover, to minimize error, the volumetric concentration is limited to the range 0% to 4%. Such a framework may be helpful in utilizing the maximum peripheral surface area of the SCFHE without any serious penalty in pumping power, and in developing advanced short heat exchangers. Keywords: CuO-water nanofluid, non-circular micro-tubes, performance index, short counter-flow heat exchanger
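The two performance-index links defined above are simple ratios and can be expressed directly in code. This is an illustrative sketch with placeholder numbers for two micro-tube shapes, not values from the study.

```python
# Hedged sketch of the two performance-index links; all numbers are placeholders.

def heat_transfer_index(q_watts, pumping_power_watts):
    """Link 1: heat transferred between the streams per unit pumping power, q_j / PP_j."""
    return q_watts / pumping_power_watts

def effectiveness_index(effectiveness, pressure_drop_pa):
    """Link 2: effectiveness per unit pressure drop, eff_j / dP_j."""
    return effectiveness / pressure_drop_pa

# Compare hypothetical triangular vs rectangular micro-tube rows at Re = 900
shapes = {
    "triangular":  {"q": 180.0, "pp": 0.9, "eff": 0.72, "dp": 4200.0},
    "rectangular": {"q": 205.0, "pp": 1.2, "eff": 0.75, "dp": 5100.0},
}
for name, s in shapes.items():
    print(name,
          heat_transfer_index(s["q"], s["pp"]),
          effectiveness_index(s["eff"], s["dp"]))
```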
Procedia PDF Downloads 213
85 Climate Safe House: A Community Housing Project Tackling Catastrophic Sea Level Rise in Coastal Communities
Authors: Chris Fersterer, Col Fay, Tobias Danielmeier, Kat Achterberg, Scott Willis
Abstract:
New Zealand, an island nation, has an extensive coastline peppered with small communities of iconic buildings known as baches. Post-WWII, these modest buildings were constructed by their owners as retreats; they were generally small and low cost, often used recycled material, and often fell below currently acceptable building standards. In the latter part of the 20th century, real estate prices in many of these communities remained low, and these areas became permanent residences for people attracted to this affordable lifestyle choice. The Blueskin Resilient Communities Trust (BRCT) is an organisation that recognises the vulnerability of communities in low-lying settlements, now prone to increased flood threat brought about by climate change and sea level rise. Some of the inhabitants of Blueskin Bay, Otago, NZ have already found their properties to be uninsurable because of the increased frequency of flood events, and property values have slumped accordingly. Territorial authorities also acknowledge this increased risk and have created additional compliance measures for new buildings that are less than 2 m above tidal peaks. Community resilience becomes an additional concern where inhabitants are attracted to a lifestyle associated with a specific location and its people, a lifestyle that cannot be met in a suburban or city context. Traditional models of social housing fail to provide the sense of community connectedness and identity enjoyed by the current residents of Blueskin Bay. BRCT has partnered with the Otago Polytechnic Design School to design a new form of community housing that can react to this environmental change. It is a longitudinal project incorporating participatory approaches as a means of getting people 'on board', understanding complex systems, and co-developing solutions. In the first period, they are seeking industry support and funding to develop a transportable and fully self-contained housing model that exploits current technologies.
BRCT also hopes that the building will become an educational tool to highlight the climate change issues facing us today. This paper uses the Climate Safe House (CSH) as a case study for education in architectural sustainability through experiential learning offered as part of the Otago Polytechnic Bachelor of Design. Students engage with the project through research methodologies including site surveys, resident interviews, data sourced from government agencies, and physical modelling. The process involves collaboration across design disciplines, including product and interior design, but also includes connections with industry, both within the education institution and with stakeholder industries introduced through BRCT. This project offers a rich learning environment where students become engaged through project-based learning within a community of practice spanning architecture, construction, energy, and other related fields. The design outcomes are expressed in a series of public exhibitions and forums where community input is sought in a truly participatory process. Keywords: community resilience, problem-based learning, project-based learning, case study
Procedia PDF Downloads 288
84 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Low actuation power is also beneficial in aerosol-generating devices, since it reduces the emission of toxic chemicals. In the case of e-cigarettes, lower heating powers can be considered powers below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a recent fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results for the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes acting on the aerosol at higher heating power. A novel mathematical expression, combining exponential, Gaussian, and polynomial (EGP) distributions, was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. The particle size and turbulent characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices. Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
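For reference, the count median aerodynamic diameter (CMAD) and geometric standard deviation (GSD) quoted above are standard log-scale statistics of a measured diameter set. The sketch below shows one common way to compute them; the sample diameters are invented for illustration and are not the study's data.

```python
import math

# Illustrative computation (not the authors' analysis code) of CMAD and GSD
# from a list of measured particle diameters in micrometres.

def cmad_and_gsd(diameters_um):
    logs = sorted(math.log(d) for d in diameters_um)
    n = len(logs)
    mid = n // 2
    # count median diameter = exp of the median log-diameter
    median_log = logs[mid] if n % 2 else 0.5 * (logs[mid - 1] + logs[mid])
    mean_log = sum(logs) / n
    var_log = sum((x - mean_log) ** 2 for x in logs) / n
    return math.exp(median_log), math.exp(math.sqrt(var_log))

diameters = [0.45, 0.55, 0.62, 0.68, 0.70, 0.74, 0.81, 0.95, 1.10]  # made-up sample
cmad, gsd = cmad_and_gsd(diameters)
print(f"CMAD = {cmad:.2f} um, GSD = {gsd:.2f}")
```

For a perfectly log-normal PSD the GSD fully characterises the spread; the asymmetry reported above is precisely a departure from that assumption.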
Procedia PDF Downloads 49
83 Multi-Criteria Assessment of Biogas Feedstock
Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough
Abstract:
Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security-of-supply issues. Linked to fossil fuels are greenhouse gas emissions, and the EU plans to reduce emissions by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstocks potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstocks, however, are the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method, with weights decided by the entropy method (which measures uncertainty using probability theory), and the TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge, and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed.
The highest weight (0.249) corresponded to production cost, reflecting a variation from a £41 gate fee to a £22/tonne cost. With these weights, grass silage was found to be the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh Matrix Method, which relies upon the Analytical Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield due to a synergistic effect induced by the feedstocks favoring positive biological interactions. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the current contribution from the agriculture sector is 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical mathematical manner in order to reach a resolution. Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy
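The entropy-weight and TOPSIS steps described above follow a standard recipe: normalise the decision matrix, derive a weight per criterion from its entropy, then score each alternative by its closeness to the ideal solution. The sketch below is an illustrative implementation of that recipe; the decision matrix and criteria are hypothetical, not the paper's data.

```python
import math

# Illustrative entropy-weight + TOPSIS pipeline. Rows = feedstocks, columns =
# criteria; the matrix values below are made up for demonstration.

def entropy_weights(matrix):
    """Weight each criterion by its entropy-based degree of divergence."""
    m, n = len(matrix), len(matrix[0])
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    k = 1.0 / math.log(m)
    divergence = []
    for j in range(n):
        e = -k * sum((row[j] / col_sums[j]) * math.log(row[j] / col_sums[j])
                     for row in matrix if row[j] > 0)
        divergence.append(1.0 - e)
    total = sum(divergence)
    return [d / total for d in divergence]

def topsis(matrix, weights, benefit):
    """Closeness of each alternative to the ideal solution (1 = best)."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        s_pos = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(n)))
        s_neg = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        scores.append(s_neg / (s_pos + s_neg))
    return scores

# Hypothetical columns: methane yield (benefit), production cost (cost), dry matter (benefit)
matrix = [[350.0, 22.0, 25.0],   # grass silage
          [200.0, 5.0, 8.0],     # manure
          [480.0, 35.0, 28.0]]   # food waste
w = entropy_weights(matrix)
print(topsis(matrix, w, benefit=[True, False, True]))
```

The entropy weighting is fully data-driven, which is why the sensitivity analysis above swaps in a judgment-based weighting (Pugh Matrix / AHP) as a cross-check.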
Procedia PDF Downloads 462
82 Secure Texting Used in a Post-Acute Pediatric Skilled Nursing Inpatient Setting: A Multidisciplinary Care Team Driven Communication System with Alarm and Alert Notification Management
Authors: Bency Ann Massinello, Nancy Day, Janet Fellini
Abstract:
Background: The use of an appropriate mode of communication among multidisciplinary care team members for the coordination of care is an extremely complicated yet important patient safety initiative. Effective communication among the team members (nursing staff, medical staff, respiratory therapists, rehabilitation therapists, the patient-family services team, etc.) is essential to develop a culture of trust and collaboration and to deliver the highest quality care to patients and their families. The inpatient post-acute pediatric setting, where children and their caregivers come for continuity of care, is no exception to the increasing use of text messages as a means of communication among clinicians. One such platform is Vocera Edge, a smart mobile app from Vocera Communications that allows teams to share sensitive patient information through an encrypted platform using company-provided shared and assigned iOS mobile devices. Objective: This paper discusses the quality initiative of implementing the transition from the Vocera Smartbadge to the Vocera Edge mobile app, the technology advantages, use case expansion, and lessons learned about a secure alternative modality that allows sending and receiving secure text messages in a pediatric post-acute setting using an iOS device. This implementation process included all direct care staff, ancillary teams, and administrative teams on the clinical units. Methods: Our institution launched the transition from the voice-prompted, hands-free Vocera Smartbadge to the Vocera Edge mobile-based app for secure care team texting using a big bang approach during the first PDSA cycle. Pre- and post-implementation data were gathered using a qualitative survey of about 500 multidisciplinary team members to determine the ease of use of the application and its efficiency in care coordination.
The technology was further expanded in its use by implementing clinical alert and alarm notification using middleware integration with the patient monitoring (Masimo) and life safety (nurse call) systems. Additional uses of the smart mobile iPhones included pushing out apps like Lexicomp and UpToDate so that they are readily available to users for evidence-based practice in medication and disease management. Results: The communication system was successfully implemented in a shared and assigned model with all of the multidisciplinary teams in our pediatric post-acute setting. In just a 3-month period post-implementation, we noticed a 14% increase, from 7,993 messages in 6 days in December 2020 to 9,116 messages in March 2021. This confirmed that all clinical and non-clinical teams were using this mode of communication for coordinating the care of their patients. System-generated data analytics were used, in addition to the pre- and post-implementation staff survey, for process evaluation. Conclusion: A secure texting option using a mobile device is a safe and efficient mode for real-time care team communication and collaboration using technology. This allows settings like post-acute pediatric care areas to be in line with the widespread use of mobile apps and technology in mainstream healthcare. Keywords: nursing informatics, mobile secure texting, multidisciplinary communication, pediatric post-acute care
Procedia PDF Downloads 196
81 The Effects of the Interaction between Prenatal Stress and Diet on Maternal Insulin Resistance and Inflammatory Profile
Authors: Karen L. Lindsay, Sonja Entringer, Claudia Buss, Pathik D. Wadhwa
Abstract:
Maternal nutrition and stress are independently recognized as among the most important factors that influence prenatal biology, with implications for fetal development and poor pregnancy outcomes. While there is substantial evidence from non-pregnancy human and animal studies that a complex, bi-directional relationship exists between nutrition and stress, to the author’s best knowledge, their interaction in the context of pregnancy has been significantly understudied. The aim of this study is to assess the interaction between maternal psychological stress and diet quality across pregnancy and its effects on biomarkers of prenatal insulin resistance and inflammation. This is a prospective longitudinal study of N=235 women carrying a healthy, singleton pregnancy, recruited from prenatal clinics of the University of California, Irvine Medical Center. Participants completed a 4-day ambulatory assessment in early, middle and late pregnancy, which included multiple daily electronic diary entries using Ecological Momentary Assessment (EMA) technology on a dedicated study smartphone. The EMA diaries gathered moment-level data on maternal perceived stress, negative mood, positive mood and quality of social interactions. The numerical scores for these variables were averaged across each study time-point and converted to Z-scores. A single composite variable for 'STRESS' was computed as follows: (Negative mood+Perceived stress)–(Positive mood+Social interaction quality). Dietary intakes were assessed by three 24-hour dietary recalls conducted within two weeks of each 4-day assessment. Daily nutrient and food group intakes were averaged across each study time-point. The Alternative Healthy Eating Index adapted for pregnancy (AHEI-P) was computed for early, middle and late pregnancy as a validated summary measure of diet quality. 
At the end of each 4-day ambulatory assessment, women provided a fasting blood sample, which was assayed for levels of glucose, insulin, Interleukin (IL)-6 and Tumor Necrosis Factor (TNF)-α. Homeostasis Model Assessment of Insulin Resistance (HOMA-IR) was computed. Pearson’s correlation was used to explore the relationship between maternal STRESS and AHEI-P within and between each study time-point. Linear regression was employed to test the association of the stress-diet interaction (STRESS*AHEI-P) with the biological markers HOMA-IR, IL-6 and TNF-α at each study time-point, adjusting for key covariates (pre-pregnancy body mass index, maternal education level, race/ethnicity). Maternal STRESS and AHEI-P were significantly inversely correlated in early (r=-0.164, p=0.018) and mid-pregnancy (r=-0.160, p=0.019), and AHEI-P from earlier gestational time-points correlated with later STRESS (early AHEI-P x mid STRESS: r=-0.168, p=0.017; mid AHEI-P x late STRESS: r=-0.142, p=0.041). In regression models, the interaction term was not associated with HOMA-IR or IL-6 at any gestational time-point. The stress-diet interaction term was significantly associated with TNF-α according to the following patterns: early AHEI-P*early STRESS vs early TNF-α (p=0.005); early AHEI-P*early STRESS vs mid TNF-α (p=0.002); early AHEI-P*mid STRESS vs mid TNF-α (p=0.005); mid AHEI-P*mid STRESS vs mid TNF-α (p=0.070); mid AHEI-P*late STRESS vs late TNF-α (p=0.011). Poor diet quality is significantly related to higher psychosocial stress levels in pregnant women across gestation, which may promote inflammation via TNF-α. Future prenatal studies should consider the combined effects of maternal stress and diet when evaluating either one of these factors on pregnancy or infant outcomes. Keywords: diet quality, inflammation, insulin resistance, nutrition, pregnancy, stress, tumor necrosis factor-alpha
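The two derived measures used in this analysis, the STRESS composite and HOMA-IR, are simple arithmetic on the underlying variables. The sketch below illustrates both; it uses the common HOMA-IR convention of glucose in mg/dL times insulin in µU/mL divided by 405, which is an assumption here since the abstract does not state the units or constant it used. The input values are illustrative only.

```python
# Sketch of the two composite measures described above; values are illustrative,
# and the HOMA-IR units/constant (mg/dL convention, divisor 405) are an assumption.

def stress_composite(neg_mood_z, perceived_stress_z, pos_mood_z, social_quality_z):
    """STRESS = (negative mood + perceived stress) - (positive mood + social quality),
    each term a z-score averaged over one 4-day assessment."""
    return (neg_mood_z + perceived_stress_z) - (pos_mood_z + social_quality_z)

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR from fasting glucose (mg/dL) and fasting insulin (uU/mL)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

print(stress_composite(0.8, 0.5, -0.2, -0.4))
print(homa_ir(90.0, 9.0))
```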
80 Theoretical Modelling of Molecular Mechanisms in Stimuli-Responsive Polymers
Authors: Catherine Vasnetsov, Victor Vasnetsov
Abstract:
Context: Thermo-responsive polymers are materials that undergo significant changes in their physical properties in response to temperature. These polymers have gained significant research attention due to their potential applications in industry and medicine. However, the molecular mechanisms underlying their behavior are not well understood, particularly in relation to cosolvency, which is crucial for practical applications. Research Aim: This study aimed to theoretically investigate the phenomenon of cosolvency in long-chain polymers using the Flory-Huggins statistical-mechanical framework. The main objective was to understand the interactions between the polymer, solvent, and cosolvent under different conditions. Methodology: The research employed a combination of Monte Carlo computer simulations and machine-learning methods. The Flory-Huggins mean field theory was used as the basis for the simulations. Spinodal graphs and ternary plots were utilized to develop an initial computer model for predicting polymer behavior. Molecular dynamics simulations were conducted to mimic real-life polymer systems. Machine learning techniques were incorporated to enhance the accuracy and reliability of the simulations. Findings: The simulations revealed that the addition of very low or very high volumes of cosolvent molecules resulted in smaller radii of gyration for the polymer, indicating poor miscibility. However, intermediate volume fractions of cosolvent led to larger radii of gyration, suggesting improved miscibility. These findings provide a possible microscopic explanation for the cosolvency phenomenon in polymer systems. Theoretical Importance: This research contributes to a better understanding of the behavior of thermo-responsive polymers and the role of cosolvency. The findings provide insights into the molecular mechanisms underlying cosolvency and offer specific predictions for future experimental investigations.
The study also presents a more rigorous analysis of the Flory-Huggins free energy theory in the context of polymer systems. Data Collection and Analysis Procedures: The data for this study were collected through Monte Carlo and molecular dynamics simulations. The interactions between the polymer, solvent, and cosolvent were analyzed using the Flory-Huggins mean field theory. Machine learning techniques were employed to enhance the accuracy of the simulations. The collected data were then analyzed to determine the impact of cosolvent volume fraction on the radius of gyration of the polymer. Question Addressed: The research addressed the question of how cosolvency affects the behavior of long-chain polymers. Specifically, the study investigated the interactions between the polymer, solvent, and cosolvent at different volume fractions to understand the resulting changes in the radius of gyration. Conclusion: In conclusion, this study utilized theoretical modeling and computer simulations to investigate the phenomenon of cosolvency in long-chain polymers. The findings suggest that moderate cosolvent volume fractions can lead to improved miscibility, as indicated by larger radii of gyration. These insights contribute to a better understanding of the molecular mechanisms underlying cosolvency in polymer systems and provide predictions for future experimental studies. The research also enhances the theoretical analysis of the Flory-Huggins free energy theory.
Keywords: molecular modelling, Flory-Huggins, cosolvency, stimuli-responsive polymers
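As a hedged illustration of the Flory-Huggins mean-field framework the study builds on, the ternary mixing free energy per lattice site can be written f/kT = (φp/N)·ln φp + φs·ln φs + φc·ln φc + χps·φp·φs + χpc·φp·φc + χsc·φs·φc for polymer (p), solvent (s) and cosolvent (c). The chain length N and the χ parameters below are illustrative inputs, not values from the study:

```python
import math

def flory_huggins_f(phi_p, phi_c, N, chi_ps, chi_pc, chi_sc):
    """Flory-Huggins mixing free energy per lattice site (units of kT) for a
    ternary polymer(p)/solvent(s)/cosolvent(c) system. The solvent fraction
    phi_s is fixed by incompressibility: phi_s = 1 - phi_p - phi_c."""
    phi_s = 1.0 - phi_p - phi_c
    # Entropic part: the polymer's term is suppressed by 1/N (chain connectivity).
    entropy = (phi_p / N) * math.log(phi_p) + phi_s * math.log(phi_s) + phi_c * math.log(phi_c)
    # Enthalpic part: pairwise chi interactions between the three components.
    enthalpy = chi_ps * phi_p * phi_s + chi_pc * phi_p * phi_c + chi_sc * phi_s * phi_c
    return entropy + enthalpy
```

Spinodal and ternary-plot analyses of the kind mentioned in the abstract amount to tracking where the curvature of this free energy surface changes sign as the compositions and χ parameters vary.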
79 Stromal Vascular Fraction Regenerative Potential in a Muscle Ischemia/Reperfusion Injury Mouse Model
Authors: Anita Conti, Riccardo Ossanna, Lindsey A. Quintero, Giamaica Conti, Andrea Sbarbati
Abstract:
Ischemia/reperfusion (IR) injury induces muscle fiber atrophy and skeletal muscle fiber death with subsequent loss of functionality. The heterogeneous pool of cells, especially mesenchymal stem cells, contained in the stromal vascular fraction (SVF) of adipose tissue could promote muscle fiber regeneration. To prevent SVF dispersion, the use of injectable biopolymers as cell carriers has been proposed. A significant element of the extracellular matrix is hyaluronic acid (HA), which has been widely used in regenerative medicine as a cell scaffold given its biocompatibility, degradability, and the possibility of chemical functionalization. Connective tissue micro-fragments enriched with SVF, obtained from mechanical disaggregation of adipose tissue, were evaluated for regeneration of IR muscle injury using low molecular weight HA as a scaffold. IR induction. Hindlimb ischemia was induced in 9 athymic nude mice by clamping the right quadriceps with a plastic band. Reperfusion was induced by cutting the plastic band after a 3-hour ischemic period. Contralateral (left) muscular tissue was used as healthy control. Treatment. Twenty-four hours after IR induction, animals (n=3) were intramuscularly injected with 100 µl of SVF mixed with HA (SVF-HA). Animals treated with 100 µl of HA (n=3) or 100 µl of saline solution (n=3) were used as controls. Treatment monitoring. All animals were monitored in vivo by magnetic resonance imaging (MRI) at 5, 7, 14 and 18 days post-injury (dpi). High-resolution morphological T2-weighted, quantitative T2 map and Dynamic Contrast-Enhanced (DCE) images were acquired in order to assess the regenerative potential of the SVF-HA treatment. Ex vivo evaluation. At 18 days after IR induction, animals were sacrificed, and the muscles were harvested for histological examination.
At 5 dpi, high-resolution T2 MR images clearly revealed an extensive edematous area due to IR damage in all groups, identifiable as an increase in signal intensity (SI) of the muscular and surrounding tissue. At 7 dpi, animals of the SVF-HA group showed a reduction of SI, and the T2 relaxation time of muscle tissue in the SVF-HA group was 29±0.5 ms, comparable with the T2 relaxation time of contralateral muscular tissue (30±0.7 ms). These findings suggest a reduction of edema and swelling. The T2 relaxation times at 7 dpi of the HA and saline groups were 84±2 ms and 90±5 ms, respectively, and remained elevated during the rest of the study. The evaluation of vascular regeneration showed similar results. Indeed, DCE-MRI analysis revealed a complete recovery of muscular tissue perfusion by 14 dpi in the SVF-HA group, while the saline and HA control groups remained in a damaged state. Finally, histological examination of SVF-HA treated animals exhibited well-defined and organized fiber morphology with lateralized nuclei, similar to contralateral healthy muscular tissue. On the contrary, HA and saline-treated animals presented inflammatory infiltrates, with HA slightly improving fiber diameter and showing less degenerated tissue. Our findings show that connective tissue micro-fragments enriched with SVF promote muscle homeostasis and perfusion restoration to a greater extent than in control groups.
Keywords: ischemia/reperfusion injury, regenerative medicine, resonance imaging, stromal vascular fraction
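Quantitative T2 map values like those reported above (e.g. 29±0.5 ms vs 84±2 ms) are typically derived per voxel by fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2) to multi-echo signals. The sketch below assumes that standard model and generic echo times; it is not the study's actual acquisition or fitting pipeline:

```python
import math

def fit_t2(echo_times_ms, signals):
    """Estimate T2 (ms) from a multi-echo decay S(TE) = S0 * exp(-TE/T2)
    via a log-linear least-squares fit: ln S is linear in TE with slope -1/T2.
    A minimal single-voxel sketch of how a quantitative T2 map value is obtained."""
    xs = echo_times_ms
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope  # slope = -1/T2
```

Applied voxel-wise across an image series, this yields the T2 map from which tissue averages such as those above can be read out.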
78 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, urban resilience means adopting an approach that can respond to sensitive conditions within the risk management process. To date, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. During the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis and its consequences and faced many challenges. Futures studies offer one way to increase the resilience of the Tehran metropolis against possible future crises. This is applied research with a descriptive-analytical design; because it seeks to relate urban resilience indicators to pandemic crises and to explain scenarios, its futures-studies method is exploratory. To extract the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), cross-impact structural analysis with MICMAC software was used. The primary factors and variables were organized into 5 main categories, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), comprising 39 variables for the cross-impact analysis. The key factors were then categorized into five main areas: managerial-institutional with 5 variables; technology (smartness) with 3 variables; economic with 2 variables; socio-cultural with 3 variables; and physical-infrastructural with 7 variables.
These key factors and driving forces were then used in explaining and developing the scenarios. To develop the scenarios for the resilience of Tehran's urban areas against pandemic crises (COVID-19), intuitive logic, scenario planning as a futures-studies method, and the Global Business Network (GBN) model were used. Four scenarios were drawn and selected using a weather metaphor that conveys the general condition of the Tehran metropolis in each situation: 1) solar scenario (optimal governance and management, leading in smart technology); 2) cloudy scenario (optimal governance and management, following in smart technology); 3) dark scenario (unfavorable governance and management, leading in smart technology); 4) stormy scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the stormy scenario the worst for the Tehran metropolis. Based on these findings, city managers can use futures-studies methods to build a coherent picture over the long-term horizon of 2050 across all factors and components of urban resilience against pandemic crises, chart the path of urban resilience, and provide platforms for upgrading the capacity to deal with crisis, so that the development and evolution of Tehran's urban areas guarantee long-term balance and stability in all dimensions and levels.
Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran
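MICMAC-style structural analysis, as used in the study above to identify key variables, rests at its core on row and column sums of a cross-impact matrix: row sums give a variable's influence (driving power), column sums its dependence. The variables and weights in this sketch are invented for illustration and are not the study's data:

```python
def micmac_scores(matrix, names):
    """Direct influence/dependence scores from a cross-impact matrix,
    as in MICMAC structural analysis. matrix[i][j] is the rated direct
    influence of variable i on variable j (diagonal is 0).
    Returns (influence, dependence) dicts keyed by variable name."""
    n = len(matrix)
    influence = {names[i]: sum(matrix[i]) for i in range(n)}              # row sums
    dependence = {names[j]: sum(matrix[i][j] for i in range(n)) for j in range(n)}  # column sums
    return influence, dependence

# Hypothetical 3-variable example (not the study's 20 key variables):
inf, dep = micmac_scores([[0, 3, 2], [1, 0, 2], [0, 1, 0]],
                         ["governance", "technology", "economy"])
```

Plotting each variable at (dependence, influence) yields the familiar MICMAC quadrants: high-influence/low-dependence variables are the driving forces retained for scenario building.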