Search results for: risk factors funding
22 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP
Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis
Abstract:
The escalating prevalence of mental health issues, such as depression and suicidal ideation, is a matter of significant global concern. It is plausible that a variety of factors, such as life events, social isolation, and preexisting physiological or psychological health conditions, could instigate or exacerbate these conditions. Traditional approaches to diagnosing depression entail a considerable amount of time and necessitate the involvement of adept practitioners. This underscores the necessity for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs sophisticated natural language processing and machine learning methodologies, including the use of the NLTK toolkit for dataset preprocessing and the utilization of a Long Short-Term Memory (LSTM) model. The PsyVBot exhibits a remarkable ability to diagnose depression with a 94% accuracy rate through the analysis of user input. Consequently, this resource proves to be efficacious for individuals, particularly those enrolled in academic institutions, who may encounter challenges pertaining to their psychological well-being. The PsyVBot employs a Long Short-Term Memory (LSTM) model that comprises a total of three layers, namely an embedding layer, an LSTM layer, and a dense layer. The stratification of these layers facilitates a precise examination of linguistic patterns that are associated with the condition of depression. The PsyVBot has the capability to accurately assess an individual's level of depression through the identification of linguistic and contextual cues. The task is achieved via a rigorous training regimen, which is executed by utilizing a dataset comprising information sourced from the subreddit r/SuicideWatch. The diverse data present in the dataset ensures precise and delicate identification of symptoms linked with depression, thereby guaranteeing accuracy. PsyVBot not only possesses diagnostic capabilities but also enhances the user experience through the utilization of audio outputs. This feature enables users to engage in more captivating and interactive interactions. The PsyVBot platform offers individuals the opportunity to conveniently diagnose mental health challenges through a confidential and user-friendly interface. Regarding the advancement of PsyVBot, maintaining user confidentiality and upholding ethical principles are of paramount significance. It is imperative to note that diligent efforts are undertaken to adhere to ethical standards, thereby safeguarding the confidentiality of user information and ensuring its security. Moreover, the chatbot fosters a conducive atmosphere that is supportive and compassionate, thereby promoting psychological welfare. In brief, PsyVBot is an automated conversational agent that utilizes an LSTM model to assess the level of depression in accordance with the input provided by the user. The demonstrated accuracy rate of 94% serves as a promising indication of the potential efficacy of employing natural language processing and machine learning techniques in tackling challenges associated with mental health. The reliability of PsyVBot is further improved by the fact that it makes use of the Reddit dataset and incorporates Natural Language Toolkit (NLTK) for preprocessing. PsyVBot represents a pioneering and user-centric solution that furnishes an easily accessible and confidential medium for seeking assistance. 
The present platform is offered as a means of tackling the pervasive issues of depression and suicidal ideation.
Keywords: chatbot, depression diagnosis, LSTM model, natural language processing
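As an illustration of the three-layer architecture described above (an embedding layer, an LSTM layer, and a dense layer), a minimal sketch in Keras is given below. The vocabulary size, embedding width, LSTM units, and binary output are assumptions for illustration, not details reported by the authors.

```python
import tensorflow as tf

VOCAB_SIZE = 10000  # assumed vocabulary size after NLTK tokenisation/preprocessing
EMBED_DIM = 128     # assumed embedding width

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),  # embedding layer
    tf.keras.layers.LSTM(64),                                               # LSTM layer
    tf.keras.layers.Dense(1, activation="sigmoid"),                         # dense output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Training would use padded, integer-encoded posts (e.g., from the r/SuicideWatch dataset)
# with binary labels; none of that data is reproduced here.
```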
21 Reimagining Kinships: Queering the Labor of Care and Motherhood in Japan’s Rental Family Services
Authors: Maari Sugawara
Abstract:
This study investigates the constructed notion of “motherhood” and queered forms of care in contemporary Japan, focusing on rental family services. In Japan, the concept of motherhood is often equated with womanhood, reflecting a pervasive ideology that views motherhood as an essential aspect of a woman's societal role, particularly amidst economic recovery and an aging population. This study interrogates these gendered expectations by linking rental family services, particularly the role of rental mothers, to traditional caregiving roles. It critiques the gendered construction of domestic labor and aims to expand conceptions of alternative family structures and caregiving roles beyond normative frameworks. Emerging in the 1980s to provide companionship for the elderly, rental family services have evolved to meet diverse social needs, with paid actors fulfilling familial roles at various social events. Despite their growing prevalence, academic exploration of this phenomenon remains limited. This research aims to fill that gap by investigating the cultural, social, and economic factors fueling the popularity of rental family services and analyzing their implications for contemporary understandings of family dynamics and care labor in Japan. Furthermore, this study underscores the disproportionate domestic labor burden women in Japan bear, often managing time-intensive household tasks, which creates a "double burden" for those in full-time employment. Care work, including elderly and disability support, is undervalued and typically compensated at near-minimum wage levels, with women predominantly filling these low-wage roles. This gender disparity in Japan's care industry contributes to labor shortages in caregiving and childcare, highlighting broader structural inequities in the labor market. Through semi-structured qualitative interviews with fifteen rental mothers, this study investigates their experiences, motivations, role dynamics, and emotional labor. It critically examines whether the labor performed by rental family actors constitutes a subversive practice deserving of appropriate compensation. Utilizing a role-playing method, the author engages with rental mothers as if they were her own, reflecting the dynamics of compensated labor. This interaction delves into the economic and emotional aspects of constructed motherhood, facilitating a broader inquiry into the value of both productive and reproductive labor in Japan. The study also investigates the relationship between sex work and rental family services within the socio-economic landscape, recognizing the links between the welfare sector and female employment in legal sex work. Although distinct, these sectors merit joint consideration due to the commonality of male clients in both industries. This research engages with theoretical perspectives framing mobile sex work as inherently queer, directly challenging the dominance of heteronormativity. The agency exercised by sex workers complicates narratives of conformity and deviance, underscoring the need to reevaluate caregiving labor in both paid and unpaid contexts. Ultimately, this research critiques the intersection of gender, care, and labor in contemporary Japan by examining the undervaluation of traditional caregiving roles alongside the labor involved in rental family services. 
It challenges Japanese policies that equate womanhood with motherhood and explores the potential of viewing outsourced care as queered maternal and non-reproductive labor, advocating for the recognition of alternative family structures and non-reproductive forms of motherhood.
Keywords: motherhood, alternative family structures, carework, Japan, queer studies
20 Language Anxiety and Learner Achievement among University Undergraduates in Sri Lanka: A Case Study of University of Sri Jayewardenepura
Authors: Sujeeva Sebastian Pereira
Abstract:
Language Anxiety (LA) – a distinct psychological construct of self-perceptions and behaviors related to classroom language learning – is perceived as a significant variable highly correlated with Second Language Acquisition (SLA). However, the existing scholarship has inadequately explored the nuances of LA in relation to South Asia, especially in terms of Sri Lankan higher education contexts. Thus, the current study, situated within the broad areas of Psychology of SLA and Applied Linguistics, investigates the impact of competency-based LA and identity-based LA on learner achievement among undergraduates of Sri Lanka. Employing a case study approach to explore the impact of LA, 750 undergraduates of the University of Sri Jayewardenepura, Sri Lanka, thus covering 25% of the student population from all seven faculties of the university, were selected as participants using stratified proportionate sampling in terms of ethnicity, gender, and disciplines. The qualitative and quantitative research instruments utilized for data collection include a questionnaire consisting of a set of structured and unstructured questions, and semi-structured interviews. Data analysis includes both descriptive and statistical measures. As per the quantitative measures of data analysis, the study employed the Pearson Correlation Coefficient test, the Chi-Square test, and Multiple Correspondence Analysis; it used LA as the dependent variable, and two types of independent variables were used: direct and indirect variables. Direct variables encompass the four main language skills - reading, writing, speaking and listening - and test anxiety. These variables were further explored through classroom activities on grammar, vocabulary and individual and group presentations. Indirect variables are identity, gender and cultural stereotypes, discipline, social background, income level, ethnicity, religion and parents’ education level. Learner achievement was measured through the final scores the participants obtained for Compulsory English - a common first-year course unit mandatory for all undergraduates. LA was measured using the FLCAS. In order to increase the validity and reliability of the study, data collected were triangulated through descriptive content analysis. Clearly evident through both the statistical analysis and the qualitative analysis of the results is the significant linear negative correlation between LA and learner achievement, and the significant negative correlation between LA and culturally-operated gender stereotypes which create identity disparities in learners. The study also found that both competency-based LA and identity-based LA are experienced primarily and inescapably due to apprehensions regarding speaking in English. Most participants who reported high levels of LA were from an urban socio-economic background of lower-income families. Findings exemplify the linguistic inequality prevalent in the socio-cultural milieu of Sri Lankan society. This inequality makes learning English a dire need, yet very much an anxiety-provoking process because of many sociolinguistic, cultural and ideological factors related to English as a Second Language (ESL) in Sri Lanka. The findings bring out the intricate interrelatedness of both the dependent variable (LA) and the independent variables stated above, emphasizing that the significant linear negative correlation between LA and learner achievement is connected to the affective, cognitive and sociolinguistic domains of SLA. 
Thus, the study highlights the promise of linguistic practices such as code-switching, crossing and the accommodation of hybrid identities as strategies for minimizing LA and maximizing the ESL learning experience.
Keywords: language anxiety, identity-based anxiety, competence-based anxiety, TESL, Sri Lanka
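The core quantitative step described above (a Pearson correlation between FLCAS anxiety scores and Compulsory English marks) can be sketched as follows; the numbers are hypothetical stand-ins for illustration only, not the study's 750-participant dataset.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical illustrative values: FLCAS totals and course marks for ten learners
flcas_scores = np.array([120, 98, 135, 110, 87, 142, 101, 93, 128, 115])
english_marks = np.array([48, 67, 39, 55, 72, 35, 63, 70, 44, 52])

r, p = pearsonr(flcas_scores, english_marks)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported LA-achievement relationship
```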
19 Settings of Conditions Leading to Reproducible and Robust Biofilm Formation in vitro in Evaluation of Drug Activity against Staphylococcal Biofilms
Authors: Adela Diepoltova, Klara Konecna, Ondrej Jandourek, Petr Nachtigal
Abstract:
A loss of control over antibiotic-resistant pathogens has become a global issue due to severe and often untreatable infections. This state is reflected in complicated treatment, higher health costs, and higher mortality. All these factors emphasize the urgent need for the discovery and development of new anti-infectives. Among the most common pathogens associated with the phenomenon of antibiotic resistance are bacteria of the genus Staphylococcus. These bacterial agents have developed several mechanisms against the effect of antibiotics. One of them is biofilm formation. In staphylococci, biofilms are associated with infections such as endocarditis, osteomyelitis, catheter-related bloodstream infections, etc. To the authors' best knowledge, no validated and standardized methodology evaluating candidate compound activity against staphylococcal biofilms exists. However, a variety of protocols for in vitro drug activity testing have been suggested, yet there are often fundamental differences. Based on our experience, a key methodological step that leads to credible results is to form a robust biofilm with appropriate attributes such as firm adherence to the substrate, a complex arrangement in layers, and the presence of an extracellular polysaccharide matrix. At first, for the purpose of drug anti-biofilm activity evaluation, the focus was put on various conditions (supplementation of cultivation media with human plasma/fetal bovine serum, shaking mode, the density of the initial inoculum) that should lead to reproducible and robust in vitro staphylococcal biofilm formation in a microtiter plate model. Three model staphylococcal reference strains were included in the study: Staphylococcus aureus (ATCC 29213), methicillin-resistant Staphylococcus aureus (ATCC 43300), and Staphylococcus epidermidis (ATCC 35983). The total biofilm biomass was quantified using the Christensen method with crystal violet, and results obtained from at least three independent experiments were statistically processed. Attention was also paid to the viability of the biofilm-forming staphylococcal cells and the presence of an extracellular polysaccharide matrix. The conditions that led to robust biofilm biomass formation with the attributes mentioned above were then applied to an alternative method analogous to the commercially available test system, the Calgary Biofilm Device. In this test system, biofilms are formed on pegs that are incorporated into the lid of the microtiter plate. This system provides several advantages (in situ detection and quantification of biofilm microbial cells that have retained their viability after drug exposure). Based on our preliminary studies, it was found that attention should also be paid to the peg surface and the substrate on which the bacterial biofilms are formed. Therefore, further optimization steps were introduced. The surface of the pegs was coated with human plasma, fetal bovine serum, and poly-L-lysine. Subsequently, the ability of the bacteria to adhere and form biofilms was monitored. In conclusion, suitable conditions were revealed, leading to the formation of reproducible, robust staphylococcal biofilms in vitro for both the microtiter plate model and the system analogous to the Calgary Biofilm Device. The robustness and typical slime texture could be detected visually. 
Likewise, an analysis by confocal laser scanning microscopy revealed a complex three-dimensional arrangement of biofilm-forming organisms surrounded by an extracellular polysaccharide matrix.
Keywords: anti-biofilm drug activity screening, in vitro biofilm formation, microtiter plate model, the Calgary biofilm device, staphylococcal infections, substrate modification, surface coating
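Computationally, the crystal violet quantification described above reduces to comparing absorbance readings across independent experiments. The sketch below uses hypothetical OD570 values and a two-sample t-test purely for illustration; it is not the authors' exact statistical workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical OD570 crystal violet readings (three independent experiments, triplicate wells)
uncoated = np.array([0.21, 0.19, 0.23, 0.20, 0.22, 0.18, 0.24, 0.21, 0.20])
plasma_coated = np.array([0.78, 0.83, 0.74, 0.81, 0.79, 0.85, 0.77, 0.80, 0.82])

print(f"uncoated pegs:      {uncoated.mean():.2f} +/- {uncoated.std(ddof=1):.2f}")
print(f"plasma-coated pegs: {plasma_coated.mean():.2f} +/- {plasma_coated.std(ddof=1):.2f}")

t, p = stats.ttest_ind(plasma_coated, uncoated)  # simple comparison of the two conditions
print(f"t = {t:.2f}, p = {p:.2e}")
```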
18 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid
Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang
Abstract:
Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat’s conditions explain the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanisms underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by factors such as its physiological state, climatic variables, and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Organism dispersal can be studied using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are an example of such approaches and have been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a grid of cells, each of which may be occupied by an individual, whose states evolve according to a set of rules based on the states of neighbouring cells. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms. The term "visual analytics" (VA) describes a semiautomated approach to electronic data processing that is guided by users who can interact with data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized based on the operator's needs. Additionally, this approach can be used to enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of the dispersal of organisms at the landscape scale, we therefore designed Pydisp, a free visual data analytics tool for spatiotemporal dispersal modeling built in Python. Its user interface allows users to perform a quick and interactive spatiotemporal analysis of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through the use of simple principles such as fuzzy cellular automata algorithms. The potential of dispersal modeling is demonstrated in a case study by predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed using mechanistic modelling approaches. 
Furthermore, as demonstrated in the example, the software is highly effective in portraying organism dispersal despite the unavailability of detailed data on the species' dispersal mechanisms.
Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal
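The fuzzy cellular automata principle mentioned above can be illustrated with a short sketch in which each cell holds a fuzzy occupancy value updated from its Moore neighbourhood and the local habitat suitability. The update rule, kernel, and parameters below are simplifications chosen for illustration and are not Pydisp's actual algorithm.

```python
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]]) / 8.0  # mean occupancy of the eight Moore neighbours

def dispersal_step(occupancy, suitability, spread=0.25):
    """One fuzzy-CA step: occupancy and suitability are 2-D arrays of values in [0, 1]."""
    neighbours = convolve(occupancy, KERNEL, mode="constant", cval=0.0)
    return np.clip(occupancy + spread * neighbours * suitability, 0.0, 1.0)

# Toy landscape: a single release point of the parasitoid on a moderately suitable grid
grid = np.zeros((100, 100))
grid[50, 50] = 1.0
suitability = np.full((100, 100), 0.8)
for _ in range(20):
    grid = dispersal_step(grid, suitability)
print(f"cells with occupancy > 0.1 after 20 steps: {(grid > 0.1).sum()}")
```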
17 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Its occurrence tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the associated mortality rate over the past three years was about 2.22%. Therefore, this study set out to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study that used an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment < 4 points, an ICU admission of less than 24 hours, or no CAM-ICU evaluation. The CAM-ICU delirium assessments performed every 8 hours within 30 days of hospitalization were regarded as events, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS score, APACHE-II score, number of indwelling invasive catheters, use of restraints, and sedative and hypnotic drugs. After feature data cleaning, processing, and supplementation with the KNN interpolation method, a total of 54,595 case events were available for machine learning analysis. The events from May 1 to November 30, 2022, were used as model training data, of which 80% formed the training set and 20% the internal validation set, and the events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then performed, and the model was retrained by adjusting the model parameters. Results: In this study, four machine learning models (XGBoost, Random Forest, Logistic Regression, and Decision Tree) were analyzed and compared. The average accuracy on internal validation was highest for Random Forest (AUC = 0.86), the average accuracy on external validation was highest for Random Forest and XGBoost (AUC = 0.86), and the average accuracy in cross-validation was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist in the real-time assessment of ICU patients, so more objective and continuous monitoring data are not available to help clinical staff identify and predict the occurrence of delirium more accurately. 
It is hoped that machine learning predictive models can detect delirium early and immediately, support clinical decisions at the best time, and, together with PADIS delirium care measures, provide individualized non-pharmacological interventions that maintain patient safety and improve the quality of care.
Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
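A minimal sketch of the modelling pipeline described above (an 80/20 training/internal-validation split and a Random Forest evaluated by AUC) is shown below with synthetic stand-in data; the feature matrix, labels, and hyperparameters are placeholders rather than the study's actual dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_events = 5000
X = rng.normal(size=(n_events, 12))    # stand-in for the 12 cleaned, KNN-imputed features
y = rng.integers(0, 2, size=n_events)  # stand-in for the "delirium in the next 8 hours" label

# 80% training / 20% internal validation, as described in the abstract
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(f"internal validation AUC: {auc:.2f}")  # the study reports AUC = 0.86 on its real data
```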
16 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
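For readers unfamiliar with the algorithm being benchmarked, one Lenia time step is essentially a kernel convolution followed by a growth mapping. The compact NumPy sketch below uses a ring-shaped kernel and growth parameters chosen as typical illustrative values, not the exact configuration used in the paper; the CUDA C++ implementation parallelises the same computation on the GPU.

```python
import numpy as np

N, R = 256, 13  # grid size and kernel radius (illustrative values)
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.sqrt(x**2 + y**2) / R
K = np.exp(-((r - 0.5) ** 2) / (2 * 0.15**2)) * (r < 1)  # ring-shaped kernel shell
K /= K.sum()
K_fft = np.fft.fft2(np.fft.ifftshift(K))  # kernel FFT for fast circular convolution

def lenia_step(A, mu=0.15, sigma=0.015, dt=0.1):
    """One Lenia update: neighbourhood potential, Gaussian growth, clip to [0, 1]."""
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))        # continuous neighbourhood sum
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2.0 * sigma**2)) - 1.0  # growth in [-1, 1]
    return np.clip(A + dt * G, 0.0, 1.0)

A = np.random.rand(N, N)  # random initial continuous state
for _ in range(100):
    A = lenia_step(A)
```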
15 Consecration from the Margins: El Anatsui in Venice and the Turbine Hall
Authors: Jonathan Adeyemi
Abstract:
Context: This study focuses on El Anatsui and his global acclaim in the art world despite his origins in the global art world’s margins. It addresses the disparities in the treatment of Western and non-Western artists and questions whether Anatsui’s consecration is a result of exoticism or of the growing consensus on decolonization. Research Aim: The aim of this study is to investigate how El Anatsui achieved global acclaim from the margins of the art world and to determine whether his consecration represents a mark of decolonization or the typical Western desire for exoticism. Methodology: The study utilizes a case study approach, literature analysis, and in-depth interviews. The artist, the organizers of the Venice Biennale, the relevant curators at Tate Modern in London, the October Gallery in London, and other galleries in Nigeria which represent the artist were interviewed for data collection. Findings: The study seeks to determine the authenticity of the growing consensus on decolonization, inclusion, and diversity in the global artistic field. Preliminary findings show that domestic socio-economic and political factors debilitated the mechanisms for local validation in Nigeria, weakening the domestic foundation for international engagement. However, alternative systems of exhibition, especially in London and the USA, contributed critically to providing the initial international visibility, which formed the foundation for his global acclaim. Out of the 21 winners of the Golden Lion for Lifetime Achievement since its inception at the 47th Venice Biennale in 1997, American artists have dominated with 10 recipients, followed by 8 recipients from Europe, 2 recipients from Africa (2007 and 2015) and 1 from Asia. This aligns with Bourdieu’s concept of cultural and economic capital, which prevented African countries from participating until recently. Moreover, while the average age of recipients is 76 years, Anatsui received the award at the age of 71, while Malick Sidibé (Mali) was awarded at 72. Thus, the Venice Biennale award for El Anatsui inclines more towards a commitment to decolonisation than towards exoticism. Theoretical Importance: This study contributes to the field by examining the dynamics of the art world's monopoly of legitimation and the role of national, ethnic and cultural differences in the promotion of artists. It aims to challenge the Westernized hierarchy of valorization and consecration in the art world. The research supports Bourdieu’s artistic field theory, which emphasises the importance of cultural, economic and social capital in determining agents’ positions and access to the field’s resources (symbolic capital). Bourdieu also established that dominated agents can change their position in the field’s hierarchy either by establishing or by navigating alternative systems. Data Collection and Analysis Procedures: The opacity of the art world’s operations places the required information within the purview of insiders (agents). Thus, the study collects data through in-depth interviews with relevant and purposively selected individuals and organizations. The data was/will be analyzed using qualitative methods, such as thematic analysis and content analysis. The interpretive analytical approach adopted facilitated the construction of meanings that may not be apparent in the data or responses. 
Questions Addressed: The study addresses how El Anatsui achieved global acclaim despite being from the margins, whether his consecration represents decolonization or exoticism, and the extent to which the global artistic field embraces decolonization, inclusion, and diversity. Conclusion: The study will contribute to knowledge by providing insights into the extent of the commitment to decolonization, inclusion, and diversity in the global artistic field. It also sheds light on the mechanisms behind El Anatsui's rise to global acclaim and challenges Western-dominated artistic hierarchies.
Keywords: decolonisation, exoticism, artistic field, culture game
14 Managing Crowds at Sports Mega Events: Examining the Impact of ‘Fan Parks’ at International Football Tournaments between 2002 and 2016
Authors: Joel Rookwood
Abstract:
Sports mega events have become increasingly significant in sporting, political and economic terms, with analysis often focusing on issues including resource expenditure, development, legacy and sustainability. Transnational tournaments can inspire interest from a variety of demographics, and the operational management of such events can involve contributions from a range of personnel. In addition to television audiences, events also attract attending spectators, and in football contexts the temporary migration of fans from potentially rival nations and teams can present event organising committees and security personnel with various challenges in relation to crowd management. The behaviour, interaction and control of supporters have previously led to incidents of disorder and hooliganism, with damage to property as well as injuries and deaths proving significant consequences. The Heysel tragedy at the 1985 European Cup final in Brussels is a notable example, where 39 fans died following crowd disorder and mismanagement. Football disasters and disorder, particularly in the context of international competition, have inspired responses from police, law makers, event organisers, clubs and associations, including stadium improvements, legislative developments and crowd management practice to improve the effectiveness of spectator safety. The growth and internationalisation of fandom and developments in event management and tourism have seen various responses to the evolving challenges associated with hosting large numbers of visiting spectators at mega events. In football contexts ‘fan parks’ are a notable example. Since their first widespread introduction in European football competitions at the 2006 World Cup finals in Germany, these facilities have become a staple element of such mega events. This qualitative, longitudinal, multi-continent research draws on extensive semi-structured interview and observation data. As a frame of reference, this work considers football events staged before and after the development of fan parks. Research was undertaken at four World Cup finals (Japan 2002, Germany 2006, South Africa 2010 and Brazil 2014), four European Championships (Portugal 2004, Switzerland/Austria 2008, Poland/Ukraine 2012 and France 2016), four other confederation tournaments (Ghana 2008, Qatar 2011, USA 2011 and Chile 2015), and four European club finals (Istanbul 2005, Athens 2007, Rome 2009 and Basle 2016). This work found that these parks are typically temporarily erected, specifically located zones where supporters congregate irrespective of allegiances to watch matches on large screens and partake in other forms of organised on-site entertainment. Such facilities can also allow organisers to control the behaviour, confine the movement and monitor the alcohol consumption of supporters. This represents a notable shift in policy from previous football tournaments, when the widely assumed causal link between alcohol and hooliganism, which frequently shaped legislative and police responses to disorder, also dissuaded some authorities from permitting fans to consume alcohol in and around stadia. It also reflects changing attitudes towards modern football fans. The work also found that in certain contexts supporters have increasingly engaged with such provision, which impacts fan behaviour, but that this is relative to factors including location, facilities, management and security.
Keywords: event, facility, fan, management, park
13 Cycleloop Personal Rapid Transit: An Exploratory Study for Last Mile Connectivity in Urban Transport
Authors: Suresh Salla
Abstract:
In this paper, the author explores the most sustainable last-mile transport mode, addressing present problems of traffic congestion, jams, pollution and travel stress. The development of energy-efficient, sustainable, integrated transport systems is a must to make our cities more livable. Emphasis on autonomous, connected, electric and shared systems for the effective utilization of vehicles and public infrastructure is on the rise. Many surface mobility innovations like PBS, ride-hailing, ride-sharing, etc., although workable, add to already congested roads when analyzed holistically, are difficult to ride in hostile weather, cause pollution and impose commuter stress. Sustainability of transportation is evaluated with respect to public adoption, average speed, energy consumption, and pollution. Why does the public prefer certain modes over others? How does commute time play a role in mode selection or shift? What factors play a role in energy consumption and pollution? Based on the study, it is clear that the public prefers a transport mode which is exhaustive (i.e., less need for interchange because the network is widespread), intensive (i.e., less waiting time because vehicles are available at frequent intervals) and convenient with the latest technologies. Average speed is dependent on stops, number of intersections, signals, clear route availability, etc. It is clear from physics that the higher the kerb weight of a vehicle, the higher the operational energy consumption. Higher kerb weight also demands heavier infrastructure. Pollution is dependent on the source of energy, the efficiency of the vehicle and the average speed. A mode can be made exhaustive when the unit infrastructure cost is low and can be offered intensively when the vehicle cost is low. Reliable and seamless integrated mobility till the last ¼ mile (Five-Minute Walk, FMW) is a must to encourage sustainable public transportation. The study shows that the average speed and reliability of dedicated modes (like Metro, PRT, BRT, etc.) are high compared to road vehicles. Electric vehicles, and more so battery-less or third-rail vehicles, reduce pollution. One potential mode can be Cycleloop PRT, where the commuter rides an e-cycle on a dedicated path, whether elevated, at grade or underground. With a kerb weight per rider of 15 kg, being 1/50th that of a car or 1/10th that of other PRT systems, the e-bike makes it a sustainable mode. The Cycleloop tube will be light, sleek and scalable and can be erected in a modular fashion, either on modified street lamp-posts or suspended between two stations. Embarking and disembarking points, or offline stations, can be placed at intervals which suit an FMW to mass public transit. In terms of convenience, the guided e-bike can be made self-balancing, thus encouraging driverless on-demand vehicles. An e-bike equipped with smart electronics and drive controls can intelligently respond to field sensors and move autonomously in reaction to the central controller. Smart switching allows travel from origin to destination without interchange of cycles. A DC-powered, battery-less e-cycle with voluntary manual pedaling makes it sustainable and provides health benefits. Tandem e-bikes, smart switching and platoon operation algorithm options provide superior throughput for the Cycleloop. Thus, Cycleloop PRT will be an exhaustive, intensive, convenient, reliable, speedy, sustainable, safe, pollution-free and healthy alternative mode for last-mile connectivity in cities.
Keywords: cycleloop PRT, five-minute walk, lean modular infrastructure, self-balanced intelligent e-cycle
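The kerb-weight argument above can be made concrete with a back-of-the-envelope comparison. The 15 kg figure is taken from the abstract, while the car and PRT figures are simply back-calculated from its stated 1/50 and 1/10 ratios and are illustrative only; since operational energy scales roughly with moving mass, the same ratios carry over to a first-order energy comparison.

```python
# Moving mass per rider implied by the abstract's ratios (illustrative arithmetic only)
kerb_weight_per_rider_kg = {
    "Cycleloop e-cycle": 15,   # stated in the abstract
    "other PRT systems": 150,  # ~10x the e-cycle, per the stated ratio
    "private car": 750,        # ~50x the e-cycle, per the stated ratio
}

baseline = kerb_weight_per_rider_kg["Cycleloop e-cycle"]
for mode, kg in kerb_weight_per_rider_kg.items():
    print(f"{mode}: {kg} kg per rider (~{kg / baseline:.0f}x the e-cycle's moving mass)")
```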
12 Hydro Solidarity and Turkey’s Role as a Waterpower in the Middle East: The Peace Water Pipeline Project
Authors: Filippo Verre
Abstract:
This paper explores Turkey’s role as an influential waterpower in the Middle East, emphasizing the Peace Water Pipeline Project (PWPP) as a paradigm of hydro solidarity rather than conventional water diplomacy. Hydro solidarity transcends the strategic and often competitive nature of water diplomacy, highlighting cooperative, inclusive, and mutually beneficial approaches to water resource management. The PWPP, which aimed to transport freshwater from Turkey’s Manavgat River to several water-scarce nations in the Middle East, exemplifies this ethos. By providing a reliable water supply to address the chronic shortages in the region, the project underscored Turkey’s commitment to fostering regional cooperation, stability, and collective well-being through shared water resources. This paper provides an in-depth analysis of the Peace Water Pipeline Project, examining its technical specifications, environmental impact, and political implications. It discusses how the project’s foundation on principles of hydro solidarity could facilitate stronger regional ties, mitigate water-related conflicts, and promote sustainable development. By prioritizing collective benefits over unilateral gains, Turkey’s approach exemplified a transformative model of resource sharing that could inspire similar initiatives globally. This paper argues that the Peace Water Pipeline Project serves as a crucial case study in demonstrating how shared natural resources can be leveraged to build trust, enhance cooperation, and achieve common goals in a geopolitically volatile region. The findings emphasize the importance of adopting hydro solidarity as a guiding principle for future transboundary water projects, showcasing how collaborative water management can play a pivotal role in fostering peace, security, and sustainable development in the Middle East and beyond. This research is based on a mixed methodological approach combining qualitative and quantitative methods. The most relevant qualitative methods will involve Case Studies and Content Analysis. Concretely, the Friendship Dam Project (FDP) between Turkey and Syria will be mentioned to underline the importance of hydro solidarity approaches as opposed to water diplomacy. Analyzing this case aims to identify factors that contribute to successful hydro solidarity agreements, such as effective communication channels, trust-building measures, and adaptive management practices. Concerning Content Analysis, reviewing and analyzing policy documents, treaties, media reports, and public statements will help identify the official narratives and discourses surrounding the PWPP. This method fully comprehends how different stakeholders frame the issues and what solutions they propose. The quantitative methodology used in this research, which complements the qualitative approaches, involves economic valuation, which quantifies the PWPP’s economic impacts on Turkey and the Middle Eastern region. This includes assessing the cost of construction and maintenance and the financial benefits derived from improved water access and reduced conflict. Hydrological modelling will also be used as a quantitative research method. Using hydrological models to simulate the water flow and distribution scenarios helps quantify the pipeline’s potential impacts on water resources. 
By assessing the sustainability of water extraction and predicting how changes in water availability might affect different regions, these models play a crucial role in this research, shedding light on the impact of transboundary infrastructures on water management.
Keywords: hydro-solidarity, Middle East, transboundary water management, peace water pipeline project, water scarcity
11 A Case Study on Utility of 18FDG-PET/CT Scan in Identifying Active Extra Lymph Nodes and Staging of Breast Cancer
Authors: Farid Risheq, M. Zaid Alrisheq, Shuaa Al-Sadoon, Karim Al-Faqih, Mays Abdulazeez
Abstract:
Breast cancer is the most frequently diagnosed cancer worldwide, and a common cause of death among women. Various conventional anatomical imaging tools are utilized for diagnosis, histological assessment and TNM (Tumor, Node, Metastases) staging of breast cancer. Biopsy of sentinel lymph node is becoming an alternative to the axillary lymph node dissection. Advances in 18-Fluoro-Deoxi-Glucose Positron Emission Tomography/Computed Tomography (18FDG-PET/CT) imaging have facilitated breast cancer diagnosis utilizing biological trapping of 18FDG inside lesion cells, expressed as Standardized Uptake Value (SUVmax). Objective: To present the utility of 18FDG uptake PET/CT scans in detecting active extra lymph nodes and distant occult metastases for breast cancer staging. Subjects and Methods: Four female patients were presented with initially classified TNM stages of breast cancer based on conventional anatomical diagnostic techniques. 18FDG-PET/CT scans were performed one hour post 18FDG intra-venous injection of (300-370) MBq, and (7-8) bed/130sec. Transverse, sagittal, and coronal views; fused PET/CT and MIP modality were reconstructed for each patient. Results: A total of twenty four lesions in breast, extended lesions to lung, liver, bone and active extra lymph nodes were detected among patients. The initial TNM stage was significantly changed post 18FDG-PET/CT scan for each patient, as follows: Patient-1: Initial TNM-stage: T1N1M0-(stage I). Finding: Two lesions in right breast (3.2cm2, SUVmax=10.2), (1.8cm2, SUVmax=6.7), associated with metastases to two right axillary lymph nodes. Final TNM-stage: T1N2M0-(stage II). Patient-2: Initial TNM-stage: T2N2M0-(stage III). Finding: Right breast lesion (6.1cm2, SUVmax=15.2), associated with metastases to right internal mammary lymph node, two right axillary lymph nodes, and sclerotic lesions in right scapula. Final TNM-stage: T2N3M1-(stage IV). Patient-3: Initial TNM-stage: T2N0M1-(stage III). Finding: Left breast lesion (11.1cm2, SUVmax=18.8), associated with metastases to two lymph nodes in left hilum, and three lesions in both lungs. Final TNM-stage: T2N2M1-(stage IV). Patient-4: Initial TNM-stage: T4N1M1-(stage III). Finding: Four lesions in upper outer quadrant area of right breast (largest: 12.7cm2, SUVmax=18.6), in addition to one lesion in left breast (4.8cm2, SUVmax=7.1), associated with metastases to multiple lesions in liver (largest: 11.4cm2, SUV=8.0), and two bony-lytic lesions in left scapula and cervicle-1. No evidence of regional or distant lymph node involvement. Final TNM-stage: T4N0M2-(stage IV). Conclusions: Our results demonstrated that 18FDG-PET/CT scans had significantly changed the TNM stages of breast cancer patients. While the T factor was unchanged, N and M factors showed significant variations. A single session of PET/CT scan was effective in detecting active extra lymph nodes and distant occult metastases, which were not identified by conventional diagnostic techniques, and might advantageously replace bone scan, and contrast enhanced CT of chest, abdomen and pelvis. Applying 18FDG-PET/CT scan early in the investigation, might shorten diagnosis time, helps deciding adequate treatment protocol, and could improve patients’ quality of life and survival. Trapping of 18FDG in malignant lesion cells, after a PET/CT scan, increases the retention index (RI%) for a considerable time, which might help localize sentinel lymph node for biopsy using a hand held gamma probe detector. 
Future work is required to demonstrate its utility.
Keywords: axillary lymph nodes, breast cancer staging, fluorodeoxyglucose positron emission tomography/computed tomography, lymph nodes
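The SUVmax values and retention index (RI%) mentioned above follow standard formulas; the sketch below uses hypothetical numeric inputs (the body-weight normalisation assumes a tissue density of about 1 g/mL) and is not a reproduction of the patients' data.

```python
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalised standardized uptake value (assumes ~1 g/mL tissue density)."""
    return tissue_kbq_per_ml / (injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0))

def retention_index(suv_early, suv_delayed):
    """RI% between early and delayed acquisitions, as used in dual-time-point FDG imaging."""
    return 100.0 * (suv_delayed - suv_early) / suv_early

# Hypothetical example within the 300-370 MBq injected-dose range cited in the abstract
print(f"SUV = {suv(51.0, 330, 65):.1f}")            # e.g. 51 kBq/mL lesion uptake, 65 kg patient
print(f"RI% = {retention_index(10.2, 12.4):.1f}")   # delayed uptake higher than early uptake
```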
10 Identifying the Conservation Gaps in Poorly Studied Protected Area in the Philippines: A Study Case of Sibuyan Island
Authors: Roven Tumaneng, Angelica Kristina Monzon, Ralph Sedricke Lapuz, Jose Don De Alban, Jennica Paula Masigan, Joanne Rae Pales, Laila Monera Pornel, Dennis Tablazon, Rizza Karen Veridiano, Jackie Lou Wenceslao, Edmund Leo Rico, Neil Aldrin Mallari
Abstract:
Most protected area management plans in the Philippines, particularly for the smaller and more remote islands, suffer from insufficient baseline data, which should provide the bases for formulating measurable conservation targets and appropriate management interventions for these protected areas. Attempts to synthesize available data, particularly on the cultural and socio-economic characteristics of local peoples within and outside protected areas, also suffer from the lack of comprehensive and detailed inventories, which should be considered in designing adaptive management interventions for those protected areas. Mt Guiting-guiting Natural Park (MGGNP), located on Sibuyan Island, is one of the poorly studied protected areas in the Philippines. In this study, we determined the highly biologically important areas of the protected area using the Maximum Entropy approach (MaxEnt) with environmental predictors (i.e., topographic, bioclimatic, land cover, and soil image layers) derived from global remotely sensed data and point occurrence data of species of birds and trees recorded during field surveys on the island. A total of 23 trigger species of birds and trees were modeled and stacked to generate species richness maps of biological high conservation value areas (HCVAs). Forest habitat change was delineated using dual-polarised L-band ALOS-PALSAR mosaic data at 25-meter spatial resolution, acquired in 2007 and 2009, to provide information on forest cover and habitat change on the island between those two years. Livelihood guilds were also determined using the data gathered from 171 household interviews, from which demographic and livelihood variables were extracted (i.e., age, gender, number of household members, educational attainment, years of residency, distance from forest edge, main occupation, alternative sources of food and resources during scarcity months, and sources of these alternative resources). Using Principal Component Analysis (PCA) and the Kruskal-Wallis test, the diversity and patterns of forest resource use by people on the island were determined, with particular focus on the economic activities that directly and indirectly affect the populations of key species, as well as on identifying levels of forest resource use by people in different areas of the park. Results showed that there are gaps in the area occupied by the natural park, as evidenced by the mismatch between the proposed HCVAs and the existing perimeters of the park. We found that, out of the eight livelihood guilds identified in the park, subsistence forest gathering was the likely main driver of forest degradation. Determining the high conservation areas and identifying the anthropogenic factors that influence the species richness and abundance of key species in the different management zones of MGGNP would provide guidance for the design of a protected area management plan and future monitoring programs. However, through intensive communication and consultation with government stakeholders and local communities, our results led to the setting of conservation targets in local development plans and serve as a basis for the repositioning of the boundaries and the reconfiguration of the management zones of MGGNP.
Keywords: conservation gaps, livelihood guilds, MaxEnt, protected area
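The stacking step described above (combining per-species MaxEnt suitability surfaces into a richness map and flagging candidate HCVAs) can be sketched as follows. The random surfaces, the single 0.5 presence threshold, and the top-10% cut-off are placeholders for the study's actual model outputs and thresholding choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_species, rows, cols = 23, 200, 200
# Stand-in for 23 MaxEnt habitat-suitability surfaces (values in 0..1 per grid cell)
suitability = rng.random((n_species, rows, cols))

PRESENCE_THRESHOLD = 0.5  # assumed; real studies often use per-species thresholds
richness = (suitability >= PRESENCE_THRESHOLD).sum(axis=0)    # stacked species-richness map
hcva_candidates = richness >= np.percentile(richness, 90)     # top 10% richest cells as candidate HCVAs
print(int(richness.max()), int(hcva_candidates.sum()))
```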
9 Coastal Foodscapes as Nature-Based Coastal Regeneration Systems
Authors: Gulce Kanturer Yasar, Hayriye Esbah Tuncay
Abstract:
Cultivated food production systems have coexisted harmoniously with nature for thousands of years through ancient techniques. Based on this experience, experimentation, and discovery, these culturally embedded methods have evolved to sustain food production, restore ecosystems, and harmoniously adapt to nature. In this era, as we seek solutions to food security challenges, enhancing and repairing our food production systems is crucial, making them more resilient to future disasters without harming the ecosystem. Instead of unsustainable conventional systems with ongoing destructive effects, we must investigate innovative and restorative production systems that integrate ancient wisdom and technology. Whether we consider agricultural fields, pastures, forests, coastal wetland ecosystems, or lagoons, it is crucial to harness the potential of these natural resources in addressing future global challenges, fostering both socio-economic resilience and ecological sustainability through strategic organization for food production. When thoughtfully designed and managed, marine-based food production has the potential to function as a living infrastructure system that addresses social and environmental challenges despite its known adverse impacts on the environment and local economies. These areas are also stages of daily life, vibrant hubs where local culture is produced and shared, contributing to the distinctive rural character of coastal settlements and exhibiting numerous spatial expressions of public nature. When we consider the history of humanity, indigenous communities have engaged in these sustainable production practices that provide goods for food, trade, culture, and the environment for many ages. Ecosystem restoration and socio-economic resilience can be achieved by combining production techniques based on ecological knowledge developed by indigenous societies with modern technologies. Coastal lagoons are highly productive coastal features that provide various natural services and societal values. They are especially vulnerable to severe physical, ecological, and social impacts of changing, challenging global conditions because of their placement within the coastal landscape. Coastal lagoons are crucial in sustaining fisheries productivity, providing storm protection, supporting tourism, and offering other natural services that hold significant value for society. Although there is considerable literature on the physical and ecological dimensions of lagoons, much less literature focuses on their economic and social values. This study will discuss the possibilities of coastal lagoons to achieve both ecologically sustainable and socio-economically resilient while maintaining their productivity by combining local techniques and modern technologies. The case study will present Turkey’s traditional aquaculture method, "Dalyans," predominantly operated by small-scale farmers in coastal lagoons. Due to human, ecological, and economic factors, dalyans are losing their landscape characteristics and efficiency. These 1000-year-old ancient techniques, rooted in centuries of traditional and agroecological knowledge, are under threat of tourism, urbanization, and unsustainable agricultural practices. Thus, Dalyans have diminished from 29 to approximately 4-5 active Dalyans. To deal with the adverse socio-economic and ecological consequences on Turkey's coastal areas, conserving Dalyans by protecting their indigenous practices while incorporating contemporary methods is essential. 
This study seeks to generate scenarios that envision the potential ways protection and development can manifest within case study areas.
Keywords: coastal foodscape, lagoon aquaculture, regenerative food systems, watershed food networks
Procedia PDF Downloads 758 Observations on Cultural Alternative and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)
Authors: Marcela Davalos Lopez
Abstract:
The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities towards the end of the 19th century is well known. Among them, Mexico City was inserted in international politics, strengthened institutions, medical knowledge, applied parameters of modernity and built sanitary engineering works. Despite the power that this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. From a comparative and contextual analysis, it will be shown that conclusions derived from modern urban historiography present, from our contemporary observations, fractures. Between 1890 and 1930, the small cities and areas surrounding the Mexican capital adapted in their own way the international and federal public health regulations. This will be shown for neighborhoods located around Mexico City and in a medium city, close to the Mexican capital (about 80 km), called Cuernavaca. While the inhabitants of the neighborhoods kept awaiting the evolutionary process and the forms that public hygiene policies were taking (because they were witnesses and affected in their territories), in Cuernavaca, the dictates came as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers, drinking water pipes, etc., were built; Cuernavaca was sheltered in other times and practices. What was this due to? Undoubtedly, the time and energy that it took politicians and the group of "scientists" to carry out these enormous works in the Mexican capital took them away from addressing the issue in remote villages. It was not until the 20th century that the federal hygiene policy began to be strengthened. Despite this, there are other factors that emphasize the particularities of each site. I would like to draw attention here to the different receptions that each town prepared on public hygiene. We will see that Cuernavaca responded to its own semi-rural culture, history, orography and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected and excluded from hygienist policies, chose to move away from them and solve the deficiencies with their own resources (they resorted to the waste that was left from the dried lake of Mexico to continue their lake practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (in this work referring particularly to the urban health system) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified by the nineteenth-century and hygienists of the first decades of the twentieth century as backward), as well as landscapes such as neighborhoods, affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to reestablish environmental balances: alternative uses of water; recycling of organic materials; local uses of fauna; various systems for breaking down excreta, and so on. In sum, what the nineteenth and first half of the twentieth centuries graduated as levels of backwardness or progress, turn out to be key information to rethink the routes of environmental conservation. When we return to the observations of the scientists, politicians and lawyers of that period, we find historically rejected cultural alterity. 
Populations such as Cuernavaca that, owing to their history, orography and/or the insufficiency of federal policies, maintained different relationships with the environment today give us clues for reorienting basic elements of cities: alternative uses of water, the reuse of raw and organic materials, and the consumption of local products, among others. It is, therefore, a matter of unearthing the rejected, which cries out to emerge to the surface. Keywords: sanitary hygiene, Mexico City, cultural alterity, environmental conservation, environmental history
Procedia PDF Downloads 164
7 From Linear to Circular Model: An Artificial Intelligence-Powered Approach in Fosso Imperatore
Authors: Carlotta D’Alessandro, Giuseppe Ioppolo, Katarzyna Szopik-Depczyńska
Abstract:
The growing scarcity of resources and the mounting pressures of climate change, water pollution, and chemical contamination have prompted societies, governments, and businesses to seek ways to minimize their environmental impact. To combat climate change and foster sustainability, Industrial Symbiosis (IS) offers a powerful approach, facilitating the shift toward a circular economic model. IS has gained prominence in the European Union's policy framework as a crucial enabler of resource efficiency and circular economy practices. The essence of IS lies in the collaborative sharing of resources such as energy, material by-products, waste, and water, enabled by geographic proximity. It is exemplified by eco-industrial parks (EIPs), which are natural environments for boosting cooperation and resource sharing between businesses. EIPs are characterized by groups of businesses situated in proximity and connected by a network of both cooperative and competitive interactions. They represent a sustainable industrial model aimed at reducing resource use, waste, and environmental impact while fostering economic and social wellbeing. IS, combined with Artificial Intelligence (AI)-driven technologies, can further optimize resource sharing and efficiency within EIPs. This research, supported by the "CE_IPs" project, aims to analyze the potential of IS and AI in advancing circularity and sustainability at Fosso Imperatore. The Fosso Imperatore Industrial Park in Nocera Inferiore, Italy, specializes in agriculture and the industrial transformation of agricultural products, particularly tomatoes, tobacco, and textile fibers. This unique industrial cluster, centered around tomato cultivation and processing, also includes mechanical engineering enterprises and agricultural packaging firms. To stimulate the shift from a traditional to a circular economic model, an AI-powered Local Development Plan (LDP) is developed for Fosso Imperatore. It can leverage data analytics, predictive modeling, and stakeholder engagement to optimize resource utilization, reduce waste, and promote sustainable industrial practices. A comprehensive SWOT analysis of the AI-powered LDP revealed several key factors influencing its potential success and challenges. Among the notable strengths and opportunities arising from AI implementation are reduced processing times, fewer human errors, and increased revenue generation. Furthermore, predictive analytics minimize downtime, bolster productivity, and elevate quality while mitigating workplace hazards. However, the integration of AI also presents potential weaknesses and threats, including significant financial investment, since implementing and maintaining AI systems can be costly. The widespread adoption of AI could lead to job losses in certain sectors. Lastly, AI systems are susceptible to cyberattacks, posing risks to data security and operational continuity. Moreover, an Analytic Hierarchy Process (AHP) analysis was employed to yield a prioritized ranking of the outlined AI-driven LDP practices based on stakeholder input, ensuring a more comprehensive and representative understanding of their relative significance for achieving sustainability in the Fosso Imperatore Industrial Park. While this study provides valuable insights into the potential of an AI-powered LDP at Fosso Imperatore, it is important to note that the findings may not be directly applicable to all industrial parks, particularly those with different sizes, geographic locations, or industry compositions.
Additional study is necessary to scrutinize the generalizability of these results and to identify best practices for implementing AI-driven LDPs in diverse contexts. Keywords: artificial intelligence, climate change, Fosso Imperatore, industrial park, industrial symbiosis
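To make the AHP step concrete, the following minimal Python sketch derives priority weights for a set of LDP practices from a single Saaty-scale pairwise-comparison matrix and checks its consistency; the practice names and judgment values are hypothetical placeholders, not data from the CE_IPs project.

# A minimal AHP sketch: a Saaty-scale pairwise-comparison matrix from stakeholders
# is reduced to a priority vector via its principal eigenvector.
import numpy as np

practices = ["AI waste-flow matching", "Predictive maintenance", "Shared water reuse"]

# Hypothetical judgments: A[i, j] = importance of practice i relative to practice j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal right eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
lambda_max = eigvals.real[k]
ci = (lambda_max - len(A)) / (len(A) - 1)
cr = ci / 0.58

for name, weight in zip(practices, w):
    print(f"{name}: {weight:.3f}")
print(f"Consistency ratio: {cr:.3f}  (acceptable if < 0.10)")

In a multi-stakeholder setting such as the one described above, one comparison matrix per stakeholder group would typically be elicited and the resulting priority vectors aggregated, for example by a weighted geometric mean.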
Procedia PDF Downloads 26
6 Unleashing Potential in Pedagogical Innovation for STEM Education: Applying Knowledge Transfer Technology to Guide a Co-Creation Learning Mechanism for the Lingering Effects Amid COVID-19
Authors: Lan Cheng, Harry Qin, Yang Wang
Abstract:
Background: COVID-19 has induced the largest digital learning experiment in history. There is also emerging research evidence that students have paid a high cost of learning loss from virtual learning. University-wide survey results demonstrate that digital learning remains difficult for students who struggle with learning challenges, isolation, or a lack of resources. Large-scale efforts are therefore increasingly devoted to digital education. To better prepare students in higher education for this grand scientific and technological transformation, STEM education has been prioritized and promoted as a strategic imperative in the ongoing curriculum reform essential for unfinished learning needs and whole-person development. Building upon five key elements identified in the STEM education literature (problem-based learning, community and belonging, technology skills, personalization of learning, and connection to the external community), this case study explores the potential of pedagogical innovation that integrates computational and experimental methodologies to support, enrich, and navigate STEM education. Objectives: The goal of this case study is to create a high-fidelity prototype design for STEM education with knowledge transfer technology that contains a Cooperative Multi-Agent System (CMAS), with the objectives of (1) conducting assessments to reveal the virtual learning mechanism and establish strategies that facilitate scientific learning engagement, accessibility, and connection within and beyond the university setting, (2) exploring and validating an interactional co-creation approach embedded in project-based learning activities in a STEM learning context that is being transformed by both digital technology and student behavior change, and (3) formulating and implementing a STEM-oriented campaign to guide learning network mapping, mitigate learning loss, enhance the learning experience, and scale up inclusive participation. Methods: This study applied a case study strategy and a methodology informed by Social Network Analysis Theory within a cross-disciplinary communication paradigm (students, peers, educators). Knowledge transfer technology is introduced to address learning challenges and to increase the efficiency of Reinforcement Learning (RL) algorithms. A co-creation learning framework was identified and investigated in a context-specific way with a learning analytics tool designed in this study. Findings: The results show that (1) CMAS-empowered learning support reduced students' confusion, difficulties, and gaps during problem-solving scenarios while increasing learner capacity and empowerment, (2) the co-creation learning phenomenon, examined through the lens of the campaign, reveals that an interactive virtual learning environment helps students navigate scientific challenges independently and collaboratively, and (3) the deliverables of the STEM educational campaign provide a methodological framework for both curriculum design and external community engagement. Conclusion: This study brings a holistic and coherent pedagogy that cultivates students' interest in STEM and develops a knowledge base for integrating and applying knowledge across different STEM disciplines.
Through co-designed, cross-disciplinary educational content and campaign promotion, the findings suggest factors that empower evidence-based learning practice while also piloting and tracking the scholastic value of co-creation in a dynamic learning environment. The data nested under the knowledge transfer technology situate learners' scientific journeys and could pave the way for theoretical advancement and broader scientific endeavors within larger datasets, projects, and communities. Keywords: co-creation, cross-disciplinary, knowledge transfer, STEM education, social network analysis
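As a rough illustration of the learning network mapping mentioned above, the sketch below models student-peer-educator interactions as a graph and computes simple centrality measures with networkx; the participants and ties are invented for illustration and are not data from the study.

# A minimal social-network-analysis sketch: interactions become graph edges, and
# centrality measures indicate who anchors or bridges the co-creation network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("student_A", "student_B"),
    ("student_A", "peer_mentor_1"),
    ("student_B", "educator_X"),
    ("student_C", "educator_X"),
    ("peer_mentor_1", "educator_X"),
])

degree = nx.degree_centrality(G)            # how connected each participant is
betweenness = nx.betweenness_centrality(G)  # who bridges otherwise separate groups

for node in G.nodes:
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")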
Procedia PDF Downloads 114
5 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District
Authors: Veenapani Rajeev Verma
Abstract:
Context: Assessing facility readiness is paramount, as it indicates the capacity of facilities to provide essential care and to be resilient to health challenges. In the context of decentralization, estimating supply-side readiness indices at the sub-national level is imperative for effective evidence-based policy but remains a colossal challenge due to the lack of dependable and representative data sources. Setting: District Poonch of Jammu and Kashmir was selected for this study. It is a remote, rural district with unprecedented topographical barriers and is identified as high priority by the government. It is also a fragile area, as it is bounded by the Line of Control with Pakistan and bears the brunt of ceasefire violations, military skirmishes and sporadic militant attacks. Hilly geographical terrain, a rudimentary or absent road network and impoverishment are quintessential to this area. Objectives: The objectives of the study are to (a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators and (b) ascertain supply-side barriers to service provisioning via stakeholder analysis. The study also strives to expand the analytical domain by unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-methods approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 sub-centres, 44 primary health centres, 3 community health centres and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium checklist was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory capacity and infection control protocols, as proposed in WHO's Service Availability and Readiness Assessment (SARA). A two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators. An OLS regression was used to determine the factors explaining the composite index generated from the PCA. A stakeholder analysis was conducted to discern qualitative information. A myriad of techniques, such as observations, key informant interviews and focus group discussions using semi-structured questionnaires with both leaders and laggards, was administered for the critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for sub-centres and PHCs (the first points of contact), with composite scores of 0.47 and 0.41, respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. Results revealed that availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47) for facilities providing secondary care. The presence of contractual staff, a more than 1-hour walk to the facility, location in zone A (most vulnerable to cross-border shelling) and inaccessibility due to snowfall and thick jungle were negatively associated with the readiness index. A nonchalant staff attitude, unavailability of staff quarters, leakages and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting.
Complex dimensions such as geographic barriers and user and provider behavior are beyond the purview of this methodology. Keywords: effective coverage, principal component analysis, readiness index, universal health coverage
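A simplified sketch of the index-construction logic is given below: binary tracer indicators are collapsed into a composite readiness score via a principal component (ordinary PCA is used here as a stand-in for the two-stage polychoric PCA in the study), and the score is regressed on supply-side barriers with OLS; the indicator names, barrier variables, and data are synthetic placeholders, not survey data.

# A simplified readiness-index sketch: PCA-based composite score + OLS on barriers.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 138  # roughly the number of surveyed facilities

# Synthetic binary tracer indicators (1 = available, 0 = not available).
tracers = pd.DataFrame(
    rng.integers(0, 2, size=(n, 4)),
    columns=["power_supply", "essential_drugs", "delivery_kit", "infection_control"],
)

# First principal component as the composite readiness score, rescaled to [0, 1].
pc1 = PCA(n_components=1).fit_transform(tracers)
readiness = (pc1 - pc1.min()) / (pc1.max() - pc1.min())

# Synthetic supply-side barriers used as explanatory variables.
barriers = pd.DataFrame({
    "contractual_staff": rng.integers(0, 2, n),
    "over_1hr_walk": rng.integers(0, 2, n),
    "shelling_zone_a": rng.integers(0, 2, n),
})

# OLS of the readiness index on the barriers.
model = sm.OLS(readiness, sm.add_constant(barriers)).fit()
print(model.summary())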
Procedia PDF Downloads 121
4 Examining Language as a Crucial Factor in Determining Academic Performance: A Case of Business Education in Hong Kong
Authors: Chau So Ling
Abstract:
I. INTRODUCTION: Educators have always been interested in exploring factors that contribute to students' academic success. It is beyond question that language, as a medium of instruction, affects student learning. This paper investigates whether language is a crucial factor in determining students' achievement in their studies. II. BACKGROUND AND SIGNIFICANCE OF STUDY: The use of English as a medium of instruction in Hong Kong is a special topic because Hong Kong is a post-colonial, international city that was formerly a British colony. In such a specific language environment, researchers in the education field have long been interested in investigating students' language proficiency and its relation to academic achievement and other related educational indicators such as motivation to learn, self-esteem, learning effectiveness, self-efficacy, etc. Along this line of thought, this study focuses specifically on business education. III. METHODOLOGY: The methodology involved two sequential stages, namely a focus group interview and a data analysis, so the study covers both qualitative and quantitative aspects. The subjects were divided into two groups. For the first group, which participated in the interview, a total of ten high school students were invited. They studied Business Studies, and their English proficiency varied. The theme of the discussion was "Does English affect your learning and examination results of Business Studies?" The students were facilitated to discuss the extent to which English proficiency affected their learning of Business subjects and were asked to rate the correlation between English and performance in Business Studies on a five-point scale. The second stage of the study involved another group of students: high school graduates who had taken the public examination for entering universities. A database containing their public examination results for different subjects was obtained for the purpose of statistical analysis. Hypotheses were tested, and evidence from the focus group interview was used to triangulate the findings. IV. MAJOR FINDINGS AND CONCLUSION: Through the sharing of personal experience, the focus group discussion indicated that a higher English standard could help students achieve better learning and examination performance. At the end of the interview, the students were asked to indicate the correlation between English proficiency and performance in Business Studies on a five-point scale. With point one meaning least correlated, ninety percent of the students gave point four for the correlation. These preliminary results illustrated that English plays an important role in students' learning of Business Studies, or at least this was what the students perceived, which set the hypotheses for the study. After the focus group interview, further evidence had to be gathered to support the hypotheses. The data analysis examined the relationship by correlating the students' public examination results in Business Studies with their levels of English. The results indicated a positive correlation between English standard and Business Studies examination performance. To highlight the importance of the English language for the study of Business Studies, the correlation with the public examination results of other, non-business subjects was also tested.
Statistical results showed that language plays a greater role in students' performance in Business subjects than in the other subjects. Possible explanations include the dynamic nature of the subject, the examination format and study requirements, and the specialist language used. Unlike in Science and Geography, students may find it more difficult to relate business concepts or terminology to their own experience, and there are not many obvious physical or practical activities or visual aids to serve as evidence or experiments. It is well researched in Hong Kong that English proficiency is a determinant of academic success, and other research studies have verified this notion. For example, research revealed that the more enriched the language experience, the better the cognitive performance in conceptual tasks; the ability to perform this kind of task is particularly important to students taking Business subjects. Another study, carried out in the UK, was geared towards identifying and analyzing the reasons for underachievement across a cohort of GCSE students taking Business Studies; results showed that weak language ability was the main barrier to raising students' performance levels. The interview results were thus successfully triangulated with the data findings. Although educational failure cannot be reduced to linguistic failure, and language is just one of the variables at play in determining academic achievement, it is generally accepted that language does affect students' academic performance; it is just a matter of extent. This paper provides recommendations for business educators on students' language training and sheds light on further research possibilities in this area. Keywords: academic performance, language, learning, medium of instruction
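The quantitative stage amounts to a straightforward correlation of paired results, as in the minimal sketch below; the English levels and Business Studies marks shown are invented for illustration and are not the study's examination data.

# A minimal correlation sketch: pair each student's English level with their
# Business Studies result and compute a Pearson correlation coefficient.
from scipy.stats import pearsonr

english_level = [5, 4, 4, 3, 5, 2, 3, 4, 2, 5]             # public exam English levels
business_score = [82, 74, 70, 61, 88, 50, 58, 77, 47, 85]  # Business Studies marks

r, p_value = pearsonr(english_level, business_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")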
Procedia PDF Downloads 121
3 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods
Authors: Murat Arıbaş, Uğur Özcan
Abstract:
Due to the increasing number of universities and academics, university funds for research activities and the grants/support given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a hard process to compare projects and determine which is better when they serve different purposes. In addition, the evaluation process becomes complicated when there is more than one evaluator and multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) framed the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilities Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realite), MAUT (Multiattribute Utility Theory), GRA (Grey Relational Analysis), etc. Each method has some advantages compared with others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of the problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by the Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in the country, was investigated. First, the criteria to be used in the project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and the extensive impact of the project. Moreover, 2-4 sub-criteria were defined for each main criterion, so it was decided to evaluate projects over 13 sub-criteria in total. Because the AHP method is well suited to determining criteria weights and the TOPSIS method offers the opportunity to rank a great number of alternatives, they are used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative ideal solution point is calculated using the Euclidean distance approach.
In the study, the main criteria and sub-criteria were compared pairwise using questionnaires developed on an importance scale by four relevant groups of people (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. These calculated criteria weights were then used as input to the TOPSIS method, and a sample of 200 projects was ranked on its merits. This new system offered the opportunity to incorporate the views of the people who take part in the project process, including preparation, evaluation and implementation, into the evaluation of academic research projects. Moreover, instead of evaluating projects using four equally weighted main criteria, a systematic decision-making process was developed using 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects. Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method
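The AHP-then-TOPSIS pipeline can be illustrated with a minimal sketch: criteria weights (assumed here to come from a prior AHP step) are applied to a vector-normalized decision matrix, and projects are ranked by their relative closeness to the ideal solution; the weights and project scores below are illustrative and do not reproduce the study's 13-sub-criterion data.

# A minimal TOPSIS sketch using AHP-derived criteria weights.
import numpy as np

# Rows: candidate projects; columns: criteria (all benefit-type here).
scores = np.array([
    [7.0, 8.0, 6.0, 9.0],
    [8.0, 6.0, 7.0, 7.0],
    [5.0, 9.0, 8.0, 6.0],
])
weights = np.array([0.40, 0.25, 0.20, 0.15])  # e.g., from a prior AHP step

# Vector-normalize each criterion column, then apply the weights.
norm = scores / np.linalg.norm(scores, axis=0)
v = norm * weights

# Ideal and negative-ideal solutions (all criteria treated as benefits).
ideal, anti_ideal = v.max(axis=0), v.min(axis=0)

# Euclidean distances and relative closeness to the ideal solution.
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

ranking = np.argsort(-closeness)
for rank, idx in enumerate(ranking, start=1):
    print(f"Rank {rank}: project {idx + 1} (closeness {closeness[idx]:.3f})")

Cost-type criteria, if present, would be handled by taking the column minimum as the ideal and the maximum as the negative ideal.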
Procedia PDF Downloads 590
2 Glycyrrhizic Acid Inhibits Lipopolysaccharide-Stimulated Bovine Fibroblast-Like Synoviocyte Invasion through Suppression of TLR4/NF-κB-Mediated Matrix Metalloproteinase-9 Expression
Authors: Hosein Maghsoudi
Abstract:
Rheumatoid arthritis (RA) is a progressive inflammatory autoimmune disease that primarily affects the joints. It is characterized by synovial hyperplasia, inflammatory cell infiltration, and deformed and painful joints, and it can lead to tissue destruction, functional disability, systemic complications, early death and socioeconomic costs. The cause of rheumatoid arthritis is unknown, but genetic and environmental factors are contributory and the prognosis is guarded. However, advances in understanding the pathogenesis of the disease have fostered the development of new therapeutics, with improved outcomes. The current treatment strategy, which reflects this progress, is to initiate aggressive therapy soon after diagnosis and to escalate the therapy, guided by an assessment of disease activity, in pursuit of clinical remission. The pathobiology of RA is multifaceted and involves T cells, B cells, fibroblast-like synoviocytes (FLSs) and the complex interaction of many pro-inflammatory cytokines. Novel biologic agents that target tumor necrosis factor or interleukin (IL)-1 and IL-6, in addition to T- and B-cell inhibitors, have resulted in favorable clinical outcomes in patients with RA. Despite this, at least 30% of RA patients are resistant to available therapies, suggesting that novel mediators should be identified that can target other disease-specific pathways or cell lineages. Among the inflammatory cell populations that may participate in RA pathogenesis, FLSs are crucial in initiating and driving RA in concert with cartilage and bone by secreting matrix metalloproteinases (MMPs) into the synovial fluid and by direct invasion into the extracellular matrix (ECM), further exacerbating joint damage. Invasion by FLSs is critical in the pathogenesis of rheumatoid arthritis. MMPs and activation of the Toll-like receptor 4 (TLR4)/nuclear factor-κB pathway play a critical role in RA-FLS invasion induced by lipopolysaccharide (LPS). The present study aimed to explore the anti-invasion activity of glycyrrhizic acid, a pharmacologically safe phytochemical agent with potent anti-inflammatory properties acting on IL-1β and TNF-α signalling pathways, in bovine fibroblast-like synoviocytes ex vivo, examining LPS-stimulated bovine FLS migration and invasion as well as MMP expression and exploring the upstream signal transduction. Results showed that glycyrrhizic acid suppressed LPS-stimulated bovine FLS migration and invasion by inhibiting MMP-9 expression and activity. In addition, our results revealed that glycyrrhizic acid inhibited the transcriptional activity of MMP-9 by suppressing the binding activity of NF-κB at the MMP-9 promoter. The extract of licorice (Glycyrrhiza glabra L.) has been widely used for many centuries in traditional Chinese medicine as a native anti-allergic agent. Glycyrrhizin (GL), a triterpenoid saponin extracted from the roots of licorice, is the most effective compound against inflammation and allergic diseases in the human body. Biological and pharmacological studies have revealed that GL possesses many pharmacological effects, such as anti-inflammatory, anti-viral and liver-protective effects, and biological effects such as the induction of cytokines (interferon-γ and IL-12) and chemokines as well as extrathymic T and anti-type 2 T cells. GL is known in traditional Chinese medicine for its anti-inflammatory effect, which was originally described by Finney in 1959.
The mechanism of the GL-induced anti-inflammatory effect is based on several pathways: GL-induced selective inhibition of prostaglandin E2 production, CK-II-mediated activation of both GL-binding lipoxygenase (gbLOX) and PLA2, an anti-thrombin action of GL, and production of reactive oxygen species (ROS). GL exerts liver-protective properties by inhibiting PLA2 or by hydroxyl radical trapping, leading to a lowering of serum alanine and aspartate transaminase levels. The present study was undertaken to examine the possible mechanism of the anti-inflammatory properties of GL acting on IL-1β and TNF-α signalling pathways in bovine fibroblast-like synoviocytes ex vivo, examining LPS-stimulated bovine FLS migration and invasion as well as MMP expression and exploring the upstream signal transduction. Our results clearly showed that treatment of bovine fibroblast-like synoviocytes with GL suppressed LPS-induced cell migration and invasion. Furthermore, they revealed that GL inhibited the transcriptional activity of MMP-9 by suppressing the binding activity of NF-κB at the MMP-9 promoter. MMP-9 is an important ECM-degrading enzyme, and overexpression of MMPs is an important feature of RA-FLSs. LPS can stimulate bovine FLSs to secrete MMPs, and this induction is regulated at the transcriptional and translational levels. In this study, LPS treatment of bovine FLSs caused an increase in MMP-2 and MMP-9 levels. The increase in MMP-9 expression and secretion was inhibited by GL. Furthermore, these effects were mimicked by MMP-9 siRNA. These results therefore indicate that the inhibition of LPS-induced bovine FLS invasion by GL occurs primarily by inhibiting MMP-9 expression and activity. Next, we analyzed the functional significance of NF-κB in the transcriptional activation of MMP-9 in bovine FLSs. Results from EMSA showed that GL suppressed LPS-induced NF-κB binding to the MMP-9 promoter. As NF-κB regulates the transcriptional activation of multiple inflammatory cytokines, we predicted that GL might target NF-κB to suppress MMP-9 transcription induced by LPS. Myeloid differentiation factor 88 (MyD88) and TIR-domain-containing adaptor protein (TIRAP) are critical proteins in the LPS-induced NF-κB and apoptotic signaling pathways; GL inhibited the expression of TLR4 and MyD88. These results demonstrate that GL suppresses LPS-induced MMP-9 expression through inhibition of the induced TLR4/NF-κB signaling pathway. Taken together, our results provide evidence that GL exerts anti-inflammatory effects by inhibiting LPS-induced bovine FLS migration and invasion, and the mechanisms may involve the suppression of TLR4/NF-κB-mediated MMP-9 expression. Although further work is needed to clarify the complicated mechanism of GL-induced anti-invasion of bovine FLSs, GL might be used as an anti-invasion drug with therapeutic efficacy in the treatment of immune-mediated inflammatory diseases such as RA. Keywords: glycyrrhizic acid, bovine fibroblast-like synoviocyte, TLR4/NF-κB, metalloproteinase-9
Procedia PDF Downloads 391
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance
Authors: Mina Naeini, Thomas A. Adams II
Abstract:
Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which makes them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages of SOFCs over competing power generation units, the technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of the degradation mechanisms and of the effects of operating conditions on long-term SOFC performance is required. The simplified SOFC models that exist in the current literature usually do not provide realistic results, since they underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although such models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of Yttria-Stabilized Zirconia (YSZ) as the electrolyte, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of electrolyte and anode electrical conductivity, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources. The model is developed as a function of current density and the H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent with multiple studies by different independent groups. Simulations using the degradation-based model illustrated that SOFC voltage drops significantly in the first 1500 hours of operation. After that, the cells exhibit a slower degradation rate.
The present analysis allowed us to discover the reason for the wide range of degradation rate values reported in the literature for conventional SOFCs: the literature is inconsistent in defining how the degradation rate is calculated. Commonly, the degradation rate is calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Due to the nonlinear profile of voltage over time, the magnitude of this degradation rate depends on the size of the time steps selected to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on the degradation rate compared to the other operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrated that a cell running at a lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and devastating impact of large current densities on the long-term performance of SOFCs, as explained by the model. Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs
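The distinction between a finite-difference slope and the instantaneous rate used in this work can be illustrated with a short sketch that differentiates a synthetic voltage-time trace numerically and expresses the local slope as a percentage of the initial voltage per 1000 hours; the voltage profile below is synthetic and is not output of the proposed model.

# A minimal sketch of the instantaneous degradation rate: the local slope of the
# voltage-time curve, expressed as percent of initial voltage lost per 1000 h.
import numpy as np

time_h = np.linspace(0, 5000, 51)                     # operating time [h], 100 h steps
voltage = 0.80 - 0.04 * (1 - np.exp(-time_h / 1500))  # fast early drop, then slower

dV_dt = np.gradient(voltage, time_h)                  # instantaneous slope [V/h]
rate_pct_per_1000h = -dV_dt / voltage[0] * 100 * 1000

print(f"Rate at    0 h: {rate_pct_per_1000h[0]:.2f} %/1000 h")
print(f"Rate at 1500 h: {rate_pct_per_1000h[15]:.2f} %/1000 h")
print(f"Rate at 5000 h: {rate_pct_per_1000h[50]:.2f} %/1000 h")

Because the trace is nonlinear, a single slope taken over 0-1500 h would differ markedly from one taken over 0-5000 h, which is the inconsistency in reported degradation rates discussed above.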
Procedia PDF Downloads 133