Search results for: depression anxiety and stress scale of Lovibond
692 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan
Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova.
Abstract:
While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study aims to test a method of establishing a correlation between urban heat islands (UHI) and urban greenery distribution in order to prioritize heat-vulnerable areas for revegetation. Armenia has failed to consider measures to mitigate UHI in urban development strategies despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can be effective in reducing UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of land surface massifs using the normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Examination of the heat-vulnerable city areas (in this study, the proportion of individuals aged 75 years and above) is based on demographic data (Census 2011). NDVI calculations were based on Sentinel-2 satellite images captured on June 5, 2021. The massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was identified using one satellite image from Landsat-8 on August 15, 2021. In this research, local relief is not considered, as the study mainly focuses on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas. The temperature excess ranges from a low in Norq Marash to a high in Nubarashen. Norq Marash and Avan have the highest tree and grass coverage proportions, with 56.2% and 54.5%, respectively. In the other districts, the share of wastelands and buildings is three times higher than that of grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. Studies have shown that decreased tree and grass coverage within a district correlates with a higher temperature increase. The temperature excess is highest in the Erebuni, Ajapnyak, and Nubarashen districts, which have less than 25% of their area covered with grass and trees. On the other hand, the Avan and Norq Marash districts have a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion of the elderly population (35%) aged 75 years and above resides in the Erebuni, Ajapnyak, and Shengavit neighborhoods, which are more susceptible to heat stress, with an LST higher than in other city districts. The findings suggest that the method of comparing the distribution of green massifs and LST can contribute to the prioritization of heat-vulnerable city areas for revegetation. The method can become a rationale for the formation of an urban greening program.
Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation
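A minimal sketch of the workflow described above, assuming small synthetic arrays in place of the Sentinel-2 and Landsat-8 rasters and hypothetical district labels: NDVI is computed from red/near-infrared reflectance, pixels are binned into five illustrative NDVI classes, and the district-level share of vegetated classes is correlated with mean LST.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Synthetic stand-ins for the Sentinel-2 bands, the Landsat-8 LST raster and district labels
rng = np.random.default_rng(0)
nir = rng.uniform(0.1, 0.6, (100, 100))
red = rng.uniform(0.05, 0.4, (100, 100))
lst = rng.uniform(30.0, 45.0, (100, 100))      # land surface temperature, deg C
district = rng.integers(0, 12, (100, 100))     # 12 hypothetical district labels

ndvi_img = ndvi(nir, red)
classes = np.digitize(ndvi_img, [0.0, 0.2, 0.4, 0.6])   # five illustrative surface classes

veg_share, mean_lst = [], []
for d in range(12):
    sel = district == d
    veg_share.append(np.mean(classes[sel] >= 2))         # share of grass/tree classes
    mean_lst.append(lst[sel].mean())

r = np.corrcoef(veg_share, mean_lst)[0, 1]   # expected to be negative on real data
print(f"district-level correlation between vegetation share and mean LST: r = {r:.2f}")
```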
Procedia PDF Downloads 72
691 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time
Authors: Deepak Loura
Abstract:
The most contemporary method of producing horticulture crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-wise, as well as environmentally friendly. The technology entails growing horticulture crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and enable a consistent supply of them even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products by global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of higher quality. These shielding features alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops. If any of these high-value crops are cultivated in protected environments like greenhouses, net houses, tunnels, etc., this profit can be multiplied. Vegetable and cut flower post-harvest losses are extremely high (20–0%), however sheltered growing techniques and year-round cropping can greatly minimize post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a big impact on the production of vegetables and flowers. The variety of their products results in significant price and quality changes for vegetables. For the application of current technology in crop production, achieving a balance between year-round availability of vegetables and flowers with minimal environmental impact and remaining competitive is a significant problem. The future of agriculture will be protected since population growth is reducing the amount of land that may be held. Protected agriculture is a particularly profitable endeavor for tiny landholdings. Small greenhouses, net houses, nurseries, and low tunnel greenhouses can all be built by farmers to increase their income. Protected agriculture is also aided by the rise in biotic and abiotic stress factors. As a result of the greater productivity levels, these technologies are not only opening up opportunities for producers with larger landholdings, but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-thinking, parallel agriculture that covers almost all aspects of farming and is rather subject to additional inspection for technical applicability to circumstances, farmer economics, and market economics.Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture
Procedia PDF Downloads 76
690 Impact of Individual and Neighborhood Social Capital on the Health Status of the Pregnant Women in Riyadh City, Saudi Arabia
Authors: Abrar Almutairi, Alyaa Farouk, Amal Gouda
Abstract:
Background: Social capital is a factor that helps in bonding in a social network. The individual and the neighborhood social capital affect the health status of members of a particular society. In addition to the influence of social health on the health of the population, social health has a significant effect on women, especially during pregnancy. The study objective was to assess the impact of social capital on the health status of pregnant women. Design: A descriptive cross-sectional correlational design was utilized in this study. Methods: A convenience sample of 210 pregnant women who attended the outpatient antenatal clinics for follow-up in King Fahad Hospital (Ministry of National Guard Health Affairs/Riyadh) and King Abdullah bin Abdelaziz University Hospital (KAAUH, Ministry of Education/Riyadh) was included in the study. Data were collected using a self-administered questionnaire that was developed by the researchers based on the “World Bank Social Capital Assessment Tool” and the SF-36 questionnaire (Short Form Health Survey). The questionnaire consists of four parts collecting information on socio-demographic data, obstetric and gynecological history, general health status and social activity during pregnancy, and the social capital of the study participants, with different types of questions such as multiple-choice questions, polar questions, and Likert scales. Data analysis was carried out using the Statistical Package for the Social Sciences version 23. Descriptive statistics such as frequency, percentage, mean, and standard deviation were used to describe the sample characteristics, and simple linear regression was used to assess the relationships between the different variables, with a level of significance of P≤0.005. Result: This study revealed that only 31.1% of the study participants perceived that they have good general health status. About two thirds (62.8%) of the participants have moderate social capital, more than one tenth (11.2%) have high social capital, and more than a quarter (26%) of them have low social capital. All dimensions of social capital except for empowerment and political action had positive significant correlations with the health status of pregnant women, with P values ranging from 0.001 to 0.010 across these dimensions. In general, social capital showed a highly statistically significant association with the health status of the pregnant women (P=0.002). Conclusion: Less than one third of the study participants had good perceived health status, and the majority of the study participants have moderate social capital, with only about one tenth of them perceiving that they have high social capital. Finally, neighborhood residency area, family size, sufficiency of income, past medical and surgical history, and parity of the study participants all significantly impacted the assessed health domains of the pregnant women.
Keywords: impact, social capital, health status, pregnant women
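An illustrative sketch of the reported analysis, assuming synthetic data and hypothetical variable names: descriptive statistics for a composite social capital score and a simple linear regression of the SF-36 health score on that score.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 210                                                      # sample size reported above
social_capital = rng.normal(60, 12, n)                       # hypothetical 0-100 composite score
health_score = 0.4 * social_capital + rng.normal(20, 10, n)  # hypothetical SF-36 total score

# Descriptive statistics (mean and standard deviation)
print(f"social capital: {social_capital.mean():.1f} ± {social_capital.std(ddof=1):.1f}")
print(f"health score:   {health_score.mean():.1f} ± {health_score.std(ddof=1):.1f}")

# Simple linear regression of health status on social capital
res = stats.linregress(social_capital, health_score)
print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, p = {res.pvalue:.4f}")
```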
Procedia PDF Downloads 57
689 A Scoping Review of the Relationship Between Oral Health and Wellbeing: The Myth and Reality
Authors: Heba Salama, Barry Gibson, Jennifer Burr
Abstract:
Introduction: It is often argued that better oral health leads to better wellbeing, and the goal of dental care is to improve wellbeing. Notwithstanding, to our best knowledge, there is a lack of evidence to support the relationship between oral health and wellbeing. Aim: The scoping review aims to examine current definitions of health and wellbeing as well as map the evidence to examine the relationship between oral health and wellbeing. Methods: The scoping review followed the Preferred Reporting Items for Systematic Reviews Extension for Scoping Review (PRISMA-ScR). A two-phase search strategy was followed because of the unmanageable number of hits returned. The first phase was to identify how well-being was conceptualised in oral health literacy, and the second phase was to search for extracted keywords. The extracted keywords were searched in four databases: PubMed, CINAHL, PsycINFO, and Web of Science. To limit the number of studies to a manageable amount, the search was limited to the open-access studies that have been published in the last five years (from 2018 to 2022). Results: Only eight studies (0.1%) of the 5455 results met the review inclusion criteria. Most of the included studies defined wellbeing based on the hedonic theory. And the Satisfaction with Life Scale is the most used. Although the research results are inconsistent, it has generally been shown that there is a weak or no association between oral health and wellbeing. Interpretation: The review revealed a very important point about how oral health literature uses loose definitions that have significant implications for empirical research. That results in misleading evidence-based conclusions. According to the review results, improving oral health is not a key factor in improving wellbeing. It appears that investing in oral health care to improve wellbeing is not a top priority to tell policymakers about. This does not imply that there should be no investment in oral health care to improve oral health. That could have an indirect link to wellbeing by eliminating the potential oral health-related barriers to quality of life that could represent the foundation of wellbeing. Limitation: Only the most recent five years (2018–2022), peer-reviewed English-language literature, and four electronic databases were included in the search. These restrictions were put in place to keep the volume of literature at a manageable level. This suggests that some significant studies might have been omitted. Furthermore, the study used a definition of wellbeing that is currently being evolved and might not everyone agrees with it. Conclusion: Whilst it is a ubiquitous argument that oral health is related to wellbeing, and this seems logical, there is little empirical evidence to support this claim. This question, therefore, requires much more detailed consideration. Funding: This project was funded by the Ministry of Higher Education and Scientific Research in Libya and Tripoli University.Keywords: oral health, wellbeing, satisfaction, emotion, quality of life, oral health related quality of life
Procedia PDF Downloads 119
688 Is Obesity Associated with CKD-(unknown) in Sri Lanka? A Protocol for a Cross Sectional Survey
Authors: Thaminda Liyanage, Anuga Liyanage, Chamila Kurukulasuriya, Sidath Bandara
Abstract:
Background: The burden of chronic kidney disease (CKD) is growing rapidly around the world, particularly in Asia. Over the last two decades, Sri Lanka has experienced an epidemic of CKD, with an ever-growing number of patients seeking medical care for CKD and its complications, especially in the “Mahaweli” river basin in the north central region of the island nation. This was apparently a new form of CKD, not attributable to conventional risk factors such as diabetes mellitus, hypertension or infection, and widely termed “CKD-unknown” or “CKDu”. In the past decade, a number of small-scale studies were conducted to determine the aetiology, prevalence and complications of CKDu in the North Central region. These hospital-based studies did not provide an accurate estimate of the problem, as merely 10% or less of the people with CKD are aware of their diagnosis even in developed countries with better access to medical care. Interestingly, similar observations were made on the changing epidemiology of obesity in the region, but no formal study has been conducted to date to determine the magnitude of the obesity burden. Moreover, whether increasing obesity in the region is associated with the CKD epidemic is yet to be explored. Methods: We will conduct an area-wide cross-sectional survey among all adult residents of the “Mahaweli” development project area 5, in the North Central Province of Sri Lanka. We will collect relevant medical history, anthropometric measurements, and blood and urine for hematological and biochemical analysis. We expect a participation rate of 75%-85% of all eligible participants. Participation in the study is voluntary, and no incentives will be provided for participation. Every analysis will be conducted in a central laboratory, and data will be stored securely. We will calculate the prevalence of obesity and chronic kidney disease, overall and by stage, using the total number of participants as the denominator, and report it per 1000 population. The association of obesity and CKD will be assessed with regression models, adjusted for potential confounding factors and stratified by potential effect modifiers where appropriate. Results: This study will provide accurate information on the prevalence of obesity and CKD in the region. Furthermore, it will explore the association between obesity and CKD, although causation may not be confirmed. Conclusion: Obesity and CKD are increasingly recognized as major public health problems in Sri Lanka. Clearly, documenting the magnitude of the problem is the essential first step. Our study will provide this vital information, enabling the government to plan a coordinated response to tackle both obesity and CKD in the region.
Keywords: BMI, chronic kidney disease, obesity, Sri Lanka
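A minimal sketch of the planned prevalence and association analysis, assuming synthetic data and hypothetical variable names (obesity defined from BMI, CKD as a binary outcome, and a small set of confounders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 4000  # hypothetical number of surveyed adults
df = pd.DataFrame({
    "obese":        rng.binomial(1, 0.25, n),   # BMI-defined obesity (assumed cut-off)
    "age":          rng.normal(45, 15, n),
    "diabetes":     rng.binomial(1, 0.10, n),
    "hypertension": rng.binomial(1, 0.20, n),
})
# Synthetic CKD outcome generated with a weak obesity effect, for illustration only
logit = -4.0 + 0.6 * df["obese"] + 0.03 * df["age"] + 0.8 * df["diabetes"]
df["ckd"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

print("CKD prevalence per 1000 participants:", round(1000 * df["ckd"].mean(), 1))

# Obesity-CKD association adjusted for potential confounders (logistic regression)
X = sm.add_constant(df[["obese", "age", "diabetes", "hypertension"]])
fit = sm.Logit(df["ckd"], X).fit(disp=0)
print(np.exp(fit.params))   # adjusted odds ratios; the 'obese' OR is the quantity of interest
```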
Procedia PDF Downloads 270
687 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to this type of flow is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique being used in separated flows are very difficult to find in the literature. Moreover, most of the situations where the Reynolds number effect is evaluated in separated flows involve numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Moreover, the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
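An illustrative sketch of two of the post-processing steps described above, assuming a synthetic stream-wise velocity record and a 200 Hz sampling rate: a Welch power spectral density to check the Doppler noise floor, and a moving block bootstrap for the uncertainty of the mean velocity.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 200.0                                              # assumed Vectrino sampling rate, Hz
u = 0.30 + 0.05 * rng.standard_normal(60 * int(fs))     # synthetic stream-wise velocity, m/s

# Power spectral density: the high-frequency plateau indicates the instrument noise floor
f, Pxx = welch(u, fs=fs, nperseg=2048)
print(f"PSD near {f[-1]:.0f} Hz: {Pxx[-1]:.2e} (m/s)^2/Hz  (noise-floor check)")

def moving_block_bootstrap(x, block=200, n_boot=1000, rng=rng):
    """Resample overlapping blocks to estimate the standard error of the mean."""
    starts = np.arange(len(x) - block + 1)
    n_blocks = int(np.ceil(len(x) / block))
    means = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.choice(starts, n_blocks)
        sample = np.concatenate([x[s:s + block] for s in idx])[:len(x)]
        means[i] = sample.mean()
    return means.std(ddof=1)

print(f"mean u = {u.mean():.3f} m/s ± {moving_block_bootstrap(u):.4f} m/s (bootstrap SE)")
```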
Procedia PDF Downloads 148
686 Development and Characterization of Novel Topical Formulation Containing Niacinamide
Authors: Sevdenur Onger, Ali Asram Sagiroglu
Abstract:
Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and cause melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity in order to reduce melanosomes has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural chemical found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to the overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and supports the skin's barrier function, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin. Furthermore, the nicotinic acid contained in NA makes it an irritant. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behavior by controlling the release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability in living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed in this study. Different formulations were prepared by varying the amount of phospholipid and cholesterol and examined in terms of particle size, polydispersity index (PDI) and pH values. The pH values of the produced formulations were determined to be compatible with the pH of the skin. Particle sizes were determined to be smaller than 250 nm, and the particles were found to be of homogeneous size in the formulation (PDI < 0.30). Despite the important advantages of liposomal systems, they have low viscosity and stability for topical use. For these reasons, in this study, liposomal cream formulations were prepared for easy topical application of the liposomal systems. As a result, liposomal cream formulations containing NA have been successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted in the continuation of the study, it is planned to test the formulation that gives the most suitable results on volunteers after obtaining the approval of the ethics committee.
Keywords: delivery systems, hyperpigmentation, liposome, niacinamide
Procedia PDF Downloads 112
685 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map has been initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated one century ago when the French hydrographic service and the University of Nancy had produced maps of the distribution of marine sediments of the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of the sediment of oceans presented was initiated with a UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to be adapted to the different applications, only the granularity of sediments is represented. The published seabed maps are studied, if they present an interest, the nature of the seabed is extracted from them, the sediment classification is transcribed and the resulted map is integrated in the world map. Data come also from interpretations of Multibeam Echo Sounder (MES) imagery of large hydrographic surveys of deep-ocean. These allow a very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are carried out using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with the realization of generalizations in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is permanent and permits a digital version every two years, with the integration of some new maps. This article describes the choices made in terms of sediment classification, the scale of source data and the zonation of the variability of the quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million punctual and surface items of data, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step by step approach makes it possible to take into account the progresses in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new classification systems for seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. But there is still a lot of work to enhance some regions, which are still based on data acquired more than half a century ago.Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
684 Bioinformatic Strategies for the Production of Glycoproteins in Algae
Authors: Fadi Saleh, Çığdem Sezer Zhmurov
Abstract:
Biopharmaceuticals represent one of the fastest-developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals. This is because these cells can achieve human-like post-translational modifications (PTMs). These systems, however, carry apparent disadvantages like high production costs, vulnerability to contamination, and limitations in scalability. This research is focused on the utilization of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins in relation to PTMs, particularly N-glycosylation. The research points to a growing interest in microalgae as a potential substitute for more conventional expression systems. A number of advantages exist in the use of microalgae, including rapid growth rates, the lack of common human pathogens, controlled scalability in bioreactors, and the ability to carry out some PTMs. Thus, the potential of microalgae to produce recombinant proteins with favorable characteristics makes them a promising platform for producing biopharmaceuticals. The study focuses on the examination of the N-glycosylation pathways across different species of microalgae. This investigation is important as N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and general performance of glycoproteins. Additionally, bioinformatics methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for human use. In this way, the present comparative analysis of the N-glycosylation pathway in humans and microalgae can be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles within the microalgal organisms. The results of the research underline microalgae's potential to help overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may help in the creation of a cost-effective and scalable means of producing quality biopharmaceuticals by modifying microalgae genetically to produce glycoproteins with N-glycosylation that is compatible with humans. Such improvements in effectiveness will benefit biopharmaceutical production and the biopharmaceutical sector through this novel, green, and efficient expression platform. This thesis, therefore, is a thorough investigation of the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on the in-depth bioinformatic analysis of microalgal N-glycosylation pathways, a platform for their engineering to produce human-compatible glycoproteins is set out in this work. The findings obtained in this research will have significant implications for the biopharmaceutical industry by opening up a new way of developing safer, more efficient, and economically more feasible biopharmaceutical manufacturing platforms.
Keywords: microalgae, glycoproteins, post-translational modification, genome
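One small, generic step of the kind of bioinformatic screening involved in N-glycosylation analysis, shown as a sketch (this is not the authors' pipeline): scanning a protein sequence for candidate N-glycosylation sequons N-X-S/T (X not Pro), here applied to a made-up example sequence.

```python
import re

# Canonical N-glycosylation sequon: Asn, any residue except Pro, then Ser or Thr
SEQUON = re.compile(r"N[^P](?=[ST])")

def find_sequons(protein: str):
    """Return 1-based positions of candidate N-glycosylation sites."""
    return [m.start() + 1 for m in SEQUON.finditer(protein.upper())]

example = "MKTAYIAKQRNISGTNASVPLLNQSW"   # hypothetical protein sequence
print(find_sequons(example))            # positions of Asn residues in N-X-S/T motifs
```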
Procedia PDF Downloads 24
683 Sprinting Beyond Sexism and Gender Stereotypes: Indian Women Fans' Experiences in the Sports Fandom
Authors: Siddhi Deshpande, Jo Jo Chacko Eapen
Abstract:
Despite almost half of India’s female population engages in watching sports, their experiences in the sports fandom are concealed by ‘traditional masculinity,’ leading to potential exclusion and harassment. To explore these experiences in-depth, this qualitative study aims to understand what coping strategies Indian women fans employ, to sustain their team identification. Employing criterion sampling, participants were screened using The Sports Spectators Identification Scale (SSIS) to assess team identification and a Brief Sexism Questionnaire to confirm participants’ experience with sexism as it aligns with the purpose of the study. The participants were Indian women who had been following any sport for more than eight years, were fluent in English, and were not professionals in Sports. Ten highly identified fans with gendered experiences were recruited for one-on-one semi-structured, in-depth interviews. The data was analyzed using Interpretive Phenomenological Analysis (IPA) to understand the lived-in experiences of women fans experiencing sexism and gender stereotypes, revealing superordinate themes of (1) Ontogenesis and Emotional Investment; (2) Gendered Expectations and Sexism; (3) Coping Strategies and Resilience; (4) Identity, Femininity, Empowerment; (5) Advocacy for Equality and Inclusivity. The findings reflect that Indian women fans experience social exclusion, harassment, sexualization, and commodification, in both online and offline fandoms, where they are disproportionately targeted with threats, misogynistic comments, and attraction-based assumptions, questioning their ‘authenticity’ as fans due to their gender. Women fans interchange between proactive strategies of assertiveness, humor, and knowledge demonstration with defensive strategies of selective engagement, self-regulatory censorship, and desensitization to deal with sexism. In this interplay, the integration of women’s ‘fan identity’ with their self-concept showcases how being a sports fan adds meaning to their lives, despite the constant scrutiny in a male-dominated space, reflecting that femininity and sports should coexist. As a result, they find refuge in female fan communities due to their similar experiences in the fandom and advocate for an equal and inclusive environment where sports are above gender, and not the other way around. A key practical implication of this research is enabling sports organizations to develop inclusive fan engagement policies that actively encourage female fan participation. This includes sensitizing stadium staff and security personnel, promoting gender-neutral language, and, most importantly, establishing safety protocols to protect female fans from adverse experiences in the fandom.Keywords: coping strategies, female sports fans, femininity, gendered experiences, team identification
Procedia PDF Downloads 46
682 Towards a More Inclusive Society: A Study on the Assimilation and Integration of the Migrant Children in Kerala
Authors: Arun Perumbilavil Anand
Abstract:
For the past few years, the state of Kerala has been witnessing a large inflow of migrant workers from other states of the country, which emerged as a result of demographic transition and Gulf emigration. The in-migration patterns in Kerala have changed over the time with the migrants having a higher residence history bringing their families to the state, thereby making the process more complicated and divergent in its approach. These developments have led to an increase in the young migrant population at least in some parts of the state, which has opened up doubts and questions related to their future in the host society. At this juncture, the study ponders into the factors that are associated with the assimilation and wellbeing of migrant children in the society of Kerala. As one of the objectives, the study also analyzed the influence and role played by the educational institutions (both public and private) in meeting the needs and aspirations of both the children and their parents. The study gains significance as it tries to identify various impediments that hinder the cognitive skill formation and behaviour patterns of the migrant children in the host society. Data and Methodology: The study is based on the primary data collected through a series of interviews and interactions held with parents, children, and teachers of different educational institutions, including both public and private. The primary survey also made use of research techniques like observation, in-depth interviews, and case study method. The study was conducted in schools in the Kanjikode area of the Palakkad district in Kerala. The findings of the study are on the basis of a survey conducted in four schools and 40 migrant children. Findings: The study found that majority of the children have wholly integrated and assimilated into the host society. The influence of the peer group was quite visible in giving stimulus to the assimilation process. Most of the children do not have any emotional or cultural sentiments attached to their state of origin, and they consider Kerala as their ‘home state’ and the local language (Malayalam) as their ‘mother tongue'. The study could also find that the existing education system in the host society fails to meet the needs and aspirations of migrants as well as that of their children. On a comparative scale, to some extent, private schools have succeeded in fulfiling the special requirements of the migrant children. An interesting point that the study could pinpoint at is that the children of the migrants show better health conditions and wellbeing than compared to the natives, which is usually addressed as an epidemiologic paradox. As a concluding remark, the study recommends the inclusion concept of inclusive education into the education system of the state with giving due emphasis on those who are at higher risk of being excluded or marginalized, along with fostering increased interaction between diverse groups.Keywords: assimilation, Kerala, migrant children, well-being
Procedia PDF Downloads 170
681 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times is studied, an approach known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the structure of the universe at a large scale. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid there. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the concept proposed in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution
Procedia PDF Downloads 99
680 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties
Authors: P. T. Collett, M. Kershaw
Abstract:
Introduction: The question of which method of anaesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anaesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) Society has recommended that either technique can be used as part of a multimodal anaesthetic regimen. A local study was performed to see if the current anaesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anaesthetic technique, day one opiate use, pain score, and length of stay. The data were collected from anaesthetic charts and the pain team follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of those, 83% (n=75) received a spinal anaesthetic and 17% (n=15) received a general anaesthetic. For patients undergoing knee replacement under general anaesthetic, the average day one pain score was 2.29, compared with 1.94 if a spinal anaesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistically significant difference between these scores. Day 1 opiate usage was significantly higher in knee replacement patients who were given a general anaesthetic (45.7 mg IV morphine equivalent) vs. those who were operated on under spinal anaesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anaesthetic techniques. Discussion: There was no significant difference in the day one pain score between the patients who received a general or spinal anaesthetic for either knee or hip replacements. The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which means it is difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anaesthetic approaches utilized in Nevill Hall Hospital. This is likely due to the lack of adherence to a standardized anaesthetic regimen. In accordance with ERAS recommendations, a standard anaesthetic protocol is a core component. The results of this study and the guidance from the ERAS Society will support the implementation of a new health-board-wide ERAS protocol.
Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation
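A hedged sketch of the kind of between-group comparison reported above, using synthetic draws centred on the quoted day-1 opiate figures (45.7 mg vs. 19.7 mg IV morphine equivalent) rather than the audit data itself; the group sizes and the use of a Mann-Whitney U test are assumptions for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
# Synthetic day-1 opiate use (mg IV morphine equivalent), centred on the reported means
ga_opiate = rng.normal(45.7, 15.0, 12).clip(min=0)      # general anaesthetic group (assumed n)
spinal_opiate = rng.normal(19.7, 10.0, 60).clip(min=0)  # spinal anaesthetic group (assumed n)

stat, p = mannwhitneyu(ga_opiate, spinal_opiate, alternative="two-sided")
print(f"median GA {np.median(ga_opiate):.1f} mg vs spinal {np.median(spinal_opiate):.1f} mg, "
      f"U = {stat:.0f}, p = {p:.4f}")
```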
Procedia PDF Downloads 127
679 Experimental Study of Impregnated Diamond Bit Wear During Sharpening
Authors: Rui Huang, Thomas Richard, Masood Mostofi
Abstract:
The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions, not only the extent of the diamonds’ wear but also their exposure or protrusion out of the matrix bonding. As much as individual diamonds wear, the bonding matrix does also wear through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding on how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, as well as no literature on the process that governs bit sharpening a procedure commonly implemented by drillers when the extent of diamond polishing yield extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance but also (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool-cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response and thus efficiency is mapped out using a tailored design experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, under the fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further shows that the threshold exposure scale with the rate of penetration up to a point where exposure reaches a maximum beyond which no more matrix can be eroded under normal drilling conditions. The second phase of this research focuses on the wear process referred as bit sharpening. Drillers rely on different approaches (increase feed rate or decrease flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh sharp diamonds and recovering a higher rate of penetration. Although a common procedure, there is no rigorous methodology to sharpen the bit and avoid excessive wear or bit damage. This paper aims to gain some insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion and how they relate to drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear and that the extent of bit sharpening can be monitored in real-time.Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate
Procedia PDF Downloads 99
678 Flipped Classroom in a European Public Health Program: The Need for Students' Self-Directness
Authors: Nynke de Jong, Inge G. P. Duimel-Peeters
Abstract:
The flipped classroom as an instructional strategy and a type of blended learning that reverses the traditional learning environment by delivering instructional content, off- and online, in- and outside the classroom, has been implemented in a 4-weeks module focusing on ageing in Europe at the Maastricht University. The main aim regarding the organization of this module was implementing flipped classroom-principles in order to create meaningful learning opportunities, while educational technologies are used to deliver content outside of the classroom. Technologies used in this module were an online interactive real time lecture from England, two interactive face-to-face lectures with visual supports, one group session including role plays and team-based learning meetings. The cohort of 2015-2016, using educational technologies, was compared with the cohort of 2014-2015 on module evaluation such as organization and instructiveness of the module, who studied the same content, although conforming the problem-based educational strategy, i.e. educational base of the Maastricht University. The cohort of 2015-2016 with its specific organization, was also more profound evaluated on outcomes as (1) experienced duration of the lecture by students, (2) experienced content of the lecture, (3) experienced the extent of the interaction and (4) format of lecturing. It was important to know how students reflected on duration and content taken into account their background knowledge so far, in order to distinguish between sufficient enough regarding prior knowledge and therefore challenging or not fitting into the course. For the evaluation, a structured online questionnaire was used, whereby above mentioned topics were asked for to evaluate by scoring them on a 4-point Likert scale. At the end, there was room for narrative feedback so that interviewees could express more in detail, if they wanted, what they experienced as good or not regarding the content of the module and its organization parts. Eventually, the response rate of the evaluation was lower than expected (54%), however, due to written feedback and exam scores, we dare to state that it gives a good and reliable overview that encourages to work further on it. Probably, the response rate may be explained by the fact that resit students were included as well, and that there maybe is too much evaluation as some time points in the program. However, overall students were excited about the organization and content of the module, but the level of self-directed behavior, necessary for this kind of educational strategy, was too low. They need to be more trained in self-directness, therefore the module will be simplified in 2016-2017 with more clear and fewer topics and extra guidance (step by step procedure). More specific information regarding the used technologies will be explained at the congress, as well as the outcomes (min and max rankings, mean and standard deviation).Keywords: blended learning, flipped classroom, public health, self-directness
Procedia PDF Downloads 219
677 Curriculum Check in Industrial Design, Based on Knowledge Management in Iran Universities
Authors: Maryam Mostafaee, Hassan Sadeghi Naeini, Sara Mostowfi
Abstract:
Today, knowledge management (KM) plays an important role in organizations. Essentially, knowledge management is about making the best use of the workforce of an organization in order to advance the goals and demands of that organization. The purpose of knowledge management is not only to manage the existing documentation, information, and data across an organization; its most important part is to control the key elements of that information and data. In particular, it is to bring the information that employees need, at the time it is needed, from genuine sources, so that the best performance and results are achieved and the organization performs at its best. Many definitions of management and its objectives have been published. Management is the science that brings accurate knowledge back into the organization in order to shape it and take full advantage of it for reaching the organization's goals and targets, to be used by employees and users. The definition of knowledge based on the Kalinz dictionary is: facts, emotions or experiences known by a person or a group of people. Based on the Merriam-Webster Dictionary, management is the act or skill of controlling and making decisions about a business, department, sport team, etc.; based on the Oxford Dictionary, it is the efficient handling of information and resources within a commercial organization; and, also based on the Oxford Dictionary, industrial design is the art or process of designing manufactured products ("the scale is a beautiful work of industrial design"). When knowledge management is put into practice in universities, the discovery and creation of new knowledge are facilitated, and procedures for knowledge exchange between different units are established. University officials and employees understand the importance of knowledge for the university's success and make more efforts to prevent errors. In this strategy, the relevant factors and affecting trends, and how to manage them in the university, are explored. In this research, Iranian universities are analyzed with respect to their use of knowledge management, how they behave and how well they have understood this matter: 1. discovery of knowledge management in Iranian universities; 2. transfer of existing knowledge between faculties and units; 3. participation of employees in acquiring, using and transferring knowledge; 4. the accessibility of valid sources; 5. research on the factors and correct processes in the university. We point to some examples that we have already analyzed, namely: enabling better and faster decision-making; making it easy to find relevant information and resources; reusing ideas, documents, and expertise; and avoiding redundant effort. Consequence: It is found that the effectiveness of knowledge management in the industrial design field is low. Based on the checklists filled in by education officials and professors in the universities, and on the calculated coefficient of effectiveness, knowledge management has not yet found its proper place.
Keywords: knowledge management, industrial design, educational curriculum, learning performance
Procedia PDF Downloads 370
676 Oil and Proteins of Sardine (Sardina Pilchardus) Compared with Casein or Mixture of Vegetable Oils Improves Dyslipidemia and Reduces Inflammation and Oxidative Stress in Hypercholesterolemic and Obese Rats
Authors: Khelladi Hadj Mostefa, Krouf Djamil, Taleb-Dida Nawel
Abstract:
Background: Obesity results from a prolonged imbalance between energy intake and energy expenditure, which depends in part on basal metabolic rate. Oils and proteins from the sea have important therapeutic effects (for example, against obesity and hypercholesterolemia) and antioxidant effects. Sardines are a widely consumed fish in the Mediterranean region. Their consumption provides humans with various nutrients such as oils (rich in omega-3 polyunsaturated fatty acids) and proteins. Methods: Sardine oil (SO) and sardine proteins (SP) were extracted and purified. A mixture of vegetable oils (olive-walnut-sunflower) was prepared from oils produced in Algeria. Eighteen Wistar rats are fed a high-fat diet enriched with 1% cholesterol for 30 days to induce obesity and hypercholesterolemia. The rats are divided into 3 groups. The first group consumes 20% sardine protein combined with 5% sardine oil (38% SFA (saturated fatty acids), 31% MUFA (monounsaturated fatty acids) and 31% PUFA (polyunsaturated fatty acids)) (SPso). The second group consumes 20% sardine protein combined with 5% of a mixture of vegetable oils (VO) containing 13% SFA, 58% MUFA and 29% PUFA (SPvo), and the third group consumes 20% casein combined with 5% of the mixture of vegetable oils and serves as a semi-synthetic reference (CASvo). Body weights and glycaemia are measured weekly. After 28 days of experimentation, the rats are sacrificed, and the blood and the liver are removed. Serum assays of total cholesterol (TC) and triglycerides (TG) were performed by enzymatic colorimetric methods. Lipid peroxidation was evaluated by assaying thiobarbituric acid reactive species (TBARS) and hydroperoxide values. Protein oxidation was evaluated by assaying carbonyl derivative values. Finally, antioxidant defense was evaluated by measuring the activity of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT). Results: After 28 days, the body weight (BW) of the rats increased significantly in the SPso and SPvo groups compared to the CAS group, by +11% and +7%, respectively. Cholesterolemia (TC) increased significantly in the SPso and SPvo groups compared to the CAS group (P<0.01), while triglyceridemia (TG) decreased significantly in the SPso group compared to the SPvo and CAS groups (P<0.01). Albumin (a marker of inflammation) increased in the SPso group compared to the SPvo and CAS groups, by +35% and +13%, respectively. The serum TBARS levels are 40% lower in the SPso group compared to the SPvo group, and they are 80% and 76% lower in SPso compared to the SPvo and CAS groups, respectively. The levels of carbonyl derivatives in the serum and liver are significantly reduced in the SPso group compared to the SPvo and CAS groups. Superoxide dismutase (SOD) activity decreased in the liver of the SPso group compared to the SPvo group (P<0.01), while that of CAT increased in the liver tissue of the SPso group compared to the SPvo group (P<0.01). Conclusion: Sardine oil combined with sardine protein has a hypotriglyceridemic effect, reduces body weight, attenuates inflammation, seems to protect against lipid peroxidation and protein oxidation, and increases antioxidant defense in hypercholesterolemic and obese rats. This could be in favor of a protective effect against obesity and cardiovascular diseases.
Keywords: rat, obesity, hypercholesterolemia, sardine protein, sardine oil, vegetable oils mixture, lipid peroxidation, protein oxidation, antioxidant defense
Procedia PDF Downloads 66
675 The Impact of Formulate and Implementation Strategy for an Organization to Better Financial Consequences in Malaysian Private Hospital
Authors: Naser Zouri
Abstract:
Purpose: Measures of formulate and implementation strategy shows amount of product rate-market based strategic management category such as courtesy, competence, and compliance to reach the high loyalty of financial ecosystem. Despite, it solves the market place error intention to fair trade organization. Finding: Finding shows the ability of executives’ level of management to motivate and better decision-making to solve the treatments in business organization. However, it made ideal level of each interposition policy for a hypothetical household. Methodology/design. Style of questionnaire about the data collection was selected to survey of both pilot test and real research. Also, divide of questionnaire and using of Free Scale Semiconductor`s between the finance employee was famous of this instrument. Respondent`s nominated basic on non-probability sampling such as convenience sampling to answer the questionnaire. The way of realization costs to performed the questionnaire divide among the respondent`s approximately was suitable as a spend the expenditure to reach the answer but very difficult to collect data from hospital. However, items of research survey was formed of implement strategy, environment, supply chain, employee from impact of implementation strategy on reach to better financial consequences and also formulate strategy, comprehensiveness strategic design, organization performance from impression on formulate strategy and financial consequences. Practical Implication: Dynamic capability approach of formulate and implement strategy focuses on the firm-specific processes through which firms integrate, build, or reconfigure resources valuable for making a theoretical contribution. Originality/ value of research: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory. We present new light on how dynamic capabilities can benefit from case study research by discovering the qualifications that shape the development of capabilities and determining the boundary conditions of the dynamic capabilities approach. Limitation of the study :Present study also relies on survey of methodology for data collection and the response perhaps connection by financial employee was difficult to responds the question because of limitation work place.Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance
Procedia PDF Downloads 304
674 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation and support data-driven decision making. The introduction of an 'artificial brain' in an organization also enables learning from the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions for operations such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on the compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in the Assisted-BIGAMES by a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Each participant continually asks himself which decisions he should make during the game to win the competition. To this end, the role of each team's VA consists of guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA enables faster learning of the decision-making process. The facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to the ones previously provided by the VA, ensuring a higher degree of robustness in decision-making (a simplified illustration of such a bottleneck check follows this abstract).
Additionally, all the information should be jointly analyzed and assessed by the players, who are expected to add "Emotional Intelligence", an essential component absent from the machine learning process. Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
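As a rough illustration of the kind of bottleneck recommendation described in item (b) above, the following Python sketch applies a simple rule-based check to workstation queue data. It is a hypothetical simplification, not the BIGAMES implementation: the workstation names, thresholds and the queue/utilization heuristic are all assumptions.

```python
# Hypothetical sketch: a rule-based "virtual assistant" check of the kind the
# abstract describes (identifying the current bottleneck from queue data).
# Workstation names, thresholds, and values are illustrative, not from BIGAMES.
from dataclasses import dataclass

@dataclass
class Workstation:
    name: str
    queue_length: int      # patients waiting
    utilization: float     # fraction of time busy (0-1)

def recommend_bottleneck_action(stations, queue_threshold=5, util_threshold=0.9):
    """Return a simple recommendation for the most congested workstation."""
    bottleneck = max(stations, key=lambda s: (s.queue_length, s.utilization))
    if bottleneck.queue_length >= queue_threshold or bottleneck.utilization >= util_threshold:
        return (f"Bottleneck detected at '{bottleneck.name}' "
                f"(queue={bottleneck.queue_length}, utilization={bottleneck.utilization:.0%}). "
                "Consider reallocating a polyvalent resource to this workstation.")
    return "No bottleneck detected; keep monitoring KPIs."

if __name__ == "__main__":
    stations = [
        Workstation("Triage", 2, 0.70),
        Workstation("X-ray", 7, 0.95),
        Workstation("Treatment", 3, 0.80),
    ]
    print(recommend_bottleneck_action(stations))
```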
Procedia PDF Downloads 104673 Transcriptional Differences in B cell Subpopulations over the Course of Preclinical Autoimmunity Development
Authors: Aleksandra Bylinska, Samantha Slight-Webb, Kevin Thomas, Miles Smith, Susan Macwana, Nicolas Dominguez, Eliza Chakravarty, Joan T. Merrill, Judith A. James, Joel M. Guthridge
Abstract:
Background: Systemic Lupus Erythematosus (SLE) is an interferon-related autoimmune disease characterized by B cell dysfunction. One of its main hallmarks is a loss of tolerance to self-antigens, leading to increased levels of autoantibodies against nuclear components (ANAs). However, up to 20% of healthy ANA+ individuals will not develop clinical illness. SLE is more prevalent among women and minority populations (African American, Asian American and Hispanic). Moreover, African Americans have a stronger interferon (IFN) signature and develop more severe symptoms. The exact mechanisms involved in ethnicity-dependent B cell dysregulation and the progression of autoimmune disease from ANA+ healthy individuals to clinical disease remain unclear. Methods: Peripheral blood mononuclear cells (PBMCs) from African American (AA) and European American (EA) ANA- (n=12), ANA+ (n=12) and SLE (n=12) individuals were assessed by multimodal scRNA-Seq/CITE-Seq methods to examine differential gene signatures in specific B cell subsets. Library preparation was done with a 10X Genomics Chromium according to established protocols and sequenced on an Illumina NextSeq. The data were further analyzed for distinct cluster identification and differential gene signatures with the Seurat package in R, and pathway analysis was performed using Ingenuity Pathway Analysis (IPA). Results: Comparing all subjects, 14 distinct B cell clusters were identified using a community detection algorithm and visualized with Uniform Manifold Approximation and Projection (UMAP). The proportion of each of these clusters varied by disease status and ethnicity. Transitional B cells trended higher in ANA+ healthy individuals, especially in AA. A ribonucleoprotein-high population (HNRNPH1 elevated, heterogeneous nuclear ribonucleoprotein, RNP-Hi) of proliferating naïve B cells was more prevalent in SLE patients, specifically in EA. An interferon-induced protein-high population (IFIT-Hi) of naïve B cells was increased in EA ANA- individuals. The proportions of memory B cell and plasma cell clusters tended to be expanded in SLE patients. As anticipated, we observed a stronger signature of cytokine-related pathways, especially interferon, in SLE individuals. Pathway analysis among AA individuals revealed an NRF2-mediated oxidative stress response signature in the transitional B cell cluster, not seen in EA individuals. TNFR1/2 and Sirtuin signaling pathway genes were higher in AA IFIT-Hi naïve B cells, whereas they were not detected in EA individuals. Interferon signaling was observed in B cells of both ethnicities. Oxidative phosphorylation was found in age-related B cells (ABCs) for both ethnicities, whereas death receptor signaling was found only in EA patients in these cells. Interferon-related transcription factors were elevated in ABCs and IFIT-Hi naïve B cells in SLE subjects of both ethnicities. Conclusions: ANA+ healthy individuals have altered gene expression pathways in B cells that might drive apoptosis and subsequent clinical autoimmune pathogenesis. Increases in certain regulatory pathways may delay progression to SLE. Further, AA individuals have more elevated activation pathways that may make them more susceptible to SLE.
Procedia PDF Downloads 175
672 Effect of Climate Changing Pattern on Aquatic Biodiversity of Bhimtal Lake at Kumaun Himalaya (India)
Authors: Davendra S. Malik
Abstract:
Bhimtal lake is located at 29°21′ N latitude and 79°24′ E longitude, at an elevation of 1332 m above mean sea level in the Kumaun region of Uttarakhand in the Indian subcontinent. The lake is decreasing in water surface area and depth, with consequences for its ecological and biological characteristics, due to climatic variations, invasive land-use patterns, degraded forest zones and changed agricultural practices in the lake catchment basin. The present study focuses on the long- and short-term effects of climate change on the aquatic biodiversity and productivity of Bhimtal lake. Meteorological data for the last fifteen years from the Bhimtal lake catchment basin revealed that air temperature has increased by 1.5 to 2.1 °C in summer and 0.2 to 0.8 °C in winter, relative humidity has increased by 4 to 6% in summer, and the rainfall pattern has changed erratically in the rainy season. The surface water temperature of Bhimtal lake showed an increase of 0.8 to 2.6 °C, and the pH value decreased by 0.5 to 0.2 in winter and increased by 0.4 to 0.6 in summer. The dissolved oxygen level in the lake showed a decreasing trend of 0.7 to 0.4 mg/L in the winter months. The mesotrophic nature of Bhimtal lake is changing towards eutrophic conditions, contributing to decreasing biodiversity. The aquatic biodiversity of Bhimtal lake consists mainly of phytoplankton, zooplankton, benthos and fish species. In the present study, a total of 5 groups of phytoplankton, 3 groups of zooplankton, 11 groups of benthos and 15 fish species were recorded from Bhimtal lake. Comparative biodiversity data for Bhimtal lake since January 2000 indicate that phytoplankton biomass has been decreasing, by 1.99% and 1.08% for the Chlorophyceae and Bacillariophyceae families, respectively. The biomass of Cyanophyceae has been increasing by 0.45%, contributing to algal blooms in the lake during the summer season. The biomass of zooplankton and benthos was found to decrease in the winter season and increase during the summer season. Eighteen endemic fish species were found in 2000-05, whereas 15 fish species were recorded in the present study. The relative fecundity of major fish species showed decreasing trends during their breeding periods in the lake. Natural and anthropogenic factors were identified as ecological threats to the existing aquatic biodiversity of Bhimtal lake. The present research paper emphasizes the effect of changing patterns of different climatic variables on the species composition and biomass of phytoplankton, zooplankton, benthos, and fishes in Bhimtal lake in the Kumaun region. The present research data will contribute significantly to assessing the changing pattern of aquatic biodiversity and productivity of Bhimtal lake over different time scales. Keywords: aquatic biodiversity, Bhimtal lake, climate change, lake ecology
Procedia PDF Downloads 222
671 Teachers Engagement to Teaching: Exploring Australian Teachers’ Attribute Constructs of Resilience, Adaptability, Commitment, Self/Collective Efficacy Beliefs
Authors: Lynn Sheridan, Dennis Alonzo, Hoa Nguyen, Andy Gao, Tracy Durksen
Abstract:
Disruptions to teaching (e.g., COVID-related) have increased work demands for teachers. There is an opportunity for research to explore evidence-informed steps to support teachers. Collective evidence indicates that teachers’ personal attributes (e.g., self-efficacy beliefs) in the workplace promote success in teaching and support teacher engagement. Teacher engagement plays a role in students’ learning and teachers’ effectiveness. Engaged teachers are better at overcoming work-related stress and burnout and are more likely to take on active roles. Teachers’ commitment is influenced by a host of personal factors (e.g., teacher well-being) and environmental factors (e.g., job stresses). The job demands-resources model provided a conceptual basis for examining how teachers’ well-being is influenced by job demands and job resources. Job demands potentially evoke strain and exceed the employee’s capability to adapt. Job resources entail what the job offers to individual teachers (e.g., organisational support), helping to reduce job demands. The application of the job demands-resources model involves gathering an evidence base on personal attributes (job resources) and their connections. The study explored the association between these constructs (resilience, adaptability, commitment, self/collective efficacy) and a teacher’s engagement with the job. The paper sought to elaborate on the model and determine the associations between the key constructs of well-being (resilience, adaptability), commitment, and motivation (self- and collective-efficacy beliefs) and teachers’ engagement in teaching. Data were collected online from 2020 to 2022 with a multi-dimensional instrument using validated items, designed to identify construct relationships. There were 170 participants. Data analysis: the reliability coefficients, means, standard deviations, skewness, and kurtosis statistics for the six variables were computed. All scales have good reliability coefficients (.72-.96). A confirmatory factor analysis (CFA) and structural equation model (SEM) were performed to provide measurement support and to obtain latent correlations among factors (a minimal reliability/correlation sketch follows this abstract). The final analysis was performed using structural equation modelling. Several fit indices were used to evaluate the model fit, including chi-square statistics and the root mean square error of approximation. The correlations of constructs indicated that positive correlations exist, with the highest found between teacher engagement and resilience (r=.80) and the lowest between teacher adaptability and collective teacher efficacy (r=.22). Given the associations, we proceeded with the CFA. The CFA yielded adequate fit: χ²(270, 1019) = 1836.79, p < .001, RMSEA = .04, CFI = .94, TLI = .93 and SRMR = .04. All values were within the threshold values, indicating a good model fit. Results indicate that increasing teacher self-efficacy beliefs will increase a teacher’s level of engagement, and that teacher adaptability and resilience are positively associated with self-efficacy beliefs, as are collective teacher efficacy beliefs. Implications for school leaders and school systems: 1. investing in increasing teachers’ sense of efficacy beliefs to manage work demands; 2. leadership approaches that enhance teachers' adaptability and resilience; and 3. a culture of collective efficacy support.
Preparing teachers for the present and the future offers an important reminder to policymakers and school leaders of the importance of supporting teachers’ personal attributes when teachers face the challenging demands of the job. Keywords: collective teacher efficacy, teacher self-efficacy, job demands, teacher engagement
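As a rough illustration of the reliability and correlation step reported above, the following Python sketch computes Cronbach's alpha for two example scales and the correlation between their construct scores. The construct names, item counts and simulated responses are placeholders, not the study's instrument or data; the full CFA/SEM analysis would require dedicated SEM software.

```python
# Illustrative sketch only: scale reliability (Cronbach's alpha) and a
# construct-level correlation of the kind reported in the abstract.
# Construct names, item counts and the random data are placeholders.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (columns) measuring one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n = 170  # sample size reported in the abstract
# Placeholder item responses (5-point Likert) for two example constructs.
engagement = pd.DataFrame(rng.integers(1, 6, size=(n, 4)),
                          columns=[f"eng_{i}" for i in range(1, 5)])
resilience = pd.DataFrame(rng.integers(1, 6, size=(n, 4)),
                          columns=[f"res_{i}" for i in range(1, 5)])

print("alpha(engagement):", round(cronbach_alpha(engagement), 2))
print("alpha(resilience):", round(cronbach_alpha(resilience), 2))
# Correlation between construct scores (item means), analogous to the latent
# correlations (e.g., r = .80 between engagement and resilience in the study).
print("r:", round(engagement.mean(axis=1).corr(resilience.mean(axis=1)), 2))
```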
Procedia PDF Downloads 124
670 Oscillating Water Column Wave Energy Converter with Deep Water Reactance
Authors: William C. Alexander
Abstract:
The oscillating water column (OWC) wave energy converter (WEC) with deep water reactance (DWR) consists of a large hollow sphere filled with seawater at the base, referred to as the ‘stabilizer’; a hollow cylinder at the top of the device, with said cylinder having a bottom open to the sea and a sealed top save for an orifice which leads to an air turbine; and a long, narrow rod connecting said stabilizer with said cylinder. A small amount of ballast at the bottom of the stabilizer and a small amount of floatation in the cylinder keep the device upright in the sea. The floatation is set such that the mean water level is nominally halfway up the cylinder. The entire device is loosely moored to the seabed to keep it from drifting away. In the presence of ocean waves, seawater moves up and down within the cylinder, producing the ‘oscillating water column’. This gives rise to air pressure within the cylinder alternating between positive and negative gauge pressure, which in turn causes air to alternately leave and enter the cylinder through the orifice in the top cover. An air turbine situated within or immediately adjacent to said orifice converts the oscillating airflow into electric power for transport to shore or elsewhere by electric power cable. Said oscillating air pressure produces large up-and-down forces on the cylinder. Said large forces are opposed, through the rod, by the large mass of water retained within the stabilizer, which is located deep enough to be mostly free of any wave influence and which provides the deep water reactance. The cylinder and stabilizer form a spring-mass system which has a vertical (heave) resonant frequency. The diameter of the cylinder largely determines the power rating of the device, while the size (and water mass within) of the stabilizer determines said resonant frequency. Said frequency is chosen to be on the lower end of the wave frequency spectrum to maximize the average power output of the device over a long span of time (such as a year). The upper portion of the device (the cylinder) moves laterally (surge) with the waves. This motion is accommodated with minimal loading on said rod by having the stabilizer shaped like a sphere, allowing the entire device to rotate about the center of the stabilizer without rotating the seawater within the stabilizer. A full-scale device of this type may have the following dimensions: the cylinder may be 16 meters in diameter and 30 meters high, the stabilizer 25 meters in diameter, and the rod 55 meters long. Simulations predict that this will produce 1,400 kW in waves of 3.5-meter height and 12-second period, with a relatively flat power curve between 5- and 16-second wave periods, as is suitable for an open-ocean location. This is nominally 10 times higher power than similar-sized WEC spar buoys reported in the literature, and the device is projected to have only 5% of the mass per unit power of other OWC converters. Keywords: oscillating water column, wave energy converter, spar buoy, stabilizer
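To make the spring-mass reasoning concrete, the back-of-the-envelope Python sketch below estimates the heave natural period from the stated dimensions. The simplifying assumptions (hydrostatic stiffness from the cylinder waterplane only, oscillating mass taken as the stabilizer's enclosed seawater plus the half-filled cylinder's water column, structural mass and hydrodynamic added mass neglected) are ours, not the author's simulation model.

```python
# Back-of-the-envelope heave resonance estimate for the cylinder + stabilizer
# spring-mass system described in the abstract (simplified assumptions, see above).
import math

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravity, m/s^2

D_cyl = 16.0    # cylinder diameter, m (from the abstract)
H_cyl = 30.0    # cylinder height, m; water level nominally at mid-height
D_stab = 25.0   # stabilizer sphere diameter, m

A_wp = math.pi * D_cyl**2 / 4                    # waterplane area, m^2
k = RHO * G * A_wp                               # hydrostatic stiffness, N/m

m_stab = RHO * (4 / 3) * math.pi * (D_stab / 2)**3   # seawater in the stabilizer
m_col = RHO * A_wp * (H_cyl / 2)                     # water column in the cylinder
m = m_stab + m_col

omega_n = math.sqrt(k / m)          # natural frequency, rad/s
T_n = 2 * math.pi / omega_n         # natural heave period, s
print(f"stiffness k = {k:.3e} N/m, mass m = {m:.3e} kg")
print(f"heave natural period ~ {T_n:.1f} s")  # lands near the long-period waves targeted
```

With these assumptions the estimate comes out around 15 s, consistent with the abstract's choice of a resonance at the low-frequency (long-period) end of the wave spectrum.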
Procedia PDF Downloads 107
669 The Territorial Expression of Religious Identity: A Case Study of Catholic Communities
Authors: Margarida Franca
Abstract:
The influence of the ‘cultural turn’ movement and the consequent deconstruction of scientific thought allowed geography and other social sciences to open or deepen studies based on the analysis of multiple identities, on singularities, and on what is particular or what marks the difference between individuals. In the context of postmodernity, the geography of religion has gained a favorable scientific, thematic and methodological focus for the qualitative and subjective interpretation of various religious identities, sacred places, territories of belonging, religious communities, among others. In the context of ‘late modernity’ or ‘net modernity’, sacred places and the definition of a network of sacred territories allow believers to attain ‘ontological security’. Integration into a religious group or a local community, particularly a religious community, allows human beings to achieve a sense of belonging, familiarity or solidarity and to overcome, in part, some of the risks or fears that society has discovered. The importance of sacred places comes not only from their inherent characteristics (e.g., the transcendent, mystical and mythical; respect, intimacy and abnegation) but also from the possibility of gathering and integrating members of the same community, creating bonds of belonging, reference and individual and collective memory. In addition, the formation of different networks of sacred places, with multiple scales and dimensions, allows human beings to identify and structure the times and spaces of their daily lives. Thus, each individual, owing to his or her unique identity and life and religious paths, creates his or her own network of sacred places. The territorial expression of religious identity thus traces a variable and unique geography of sacred places. Through a case study of the practicing Catholic population in the diocese of Coimbra (Portugal), the aim is to study the territorial expression of the religious identity of the different local communities of this city. Through a survey of six parishes in the city, we sought to identify which factors, qualitative or not, define the different territorial expressions at the local, national and international scales, with emphasis on the socioeconomic profile of the population, the religious path of the believers, the religious group they belong to and external interferences, religious or not. The analysis of these factors allows us to categorize the communities of the city of Coimbra and, for each typology or category, to identify the specific elements that tie believers to sacred places, the networks and religious territories that structure religious practice and experience, and also the non-representational landscape that unifies and creates memory. We conclude that an apparently homogeneous group, the Catholic community, incorporates multitemporalities and multiterritorialities that are necessary to understand the history and geography of a whole country and of the Catholic communities in particular. Keywords: geography of religion, sacred places, territoriality, Catholic Church
Procedia PDF Downloads 323
668 Winter Wheat Yield Forecasting Using Sentinel-2 Imagery at the Early Stages
Authors: Chunhua Liao, Jinfei Wang, Bo Shan, Yang Song, Yongjun He, Taifeng Dong
Abstract:
Winter wheat is one of the main crops in Canada. Forecasting within-field variability of winter wheat yield at the early stages is essential for precision farming. However, crop yield modelling based on high-spatial-resolution satellite data is generally affected by the lack of continuous satellite observations, which reduces the generalization ability of the models and increases the difficulty of crop yield forecasting at the early stages. In this study, the correlations between Sentinel-2 data (vegetation indices and reflectance) and yield data collected by a combine harvester were investigated, and a generalized multivariate linear regression (MLR) model was built and tested with data acquired in different years. It was found that the four-band reflectance (blue, green, red, near-infrared) performed better than the corresponding vegetation indices (NDVI, EVI, WDRVI and OSAVI) in wheat yield prediction. The optimum phenological stage for wheat yield prediction with the highest accuracy was the growing stage from the end of flowering to the beginning of the grain-filling stage. The best MLR model was therefore built to predict wheat yield before harvest using Sentinel-2 data acquired at the end of the flowering stage. Further, to improve yield prediction at the early stages, three simple unsupervised domain adaptation (DA) methods were adopted to transform the reflectance data at the early stages to the optimum phenological stage. Winter wheat yield prediction using multiple vegetation indices showed higher accuracy than using a single vegetation index. The optimum stage for winter wheat yield forecasting varied among fields when using vegetation indices, while it was consistent when using multispectral reflectance, for which the optimum stage was the end of the flowering stage. The average testing RMSE of the MLR model at the end of the flowering stage was 604.48 kg/ha. Near the booting stage, the average testing RMSE of yield prediction using the best MLR was reduced to 799.18 kg/ha when applying the mean matching domain adaptation approach to transform the data to the target domain (the end of flowering), compared to 1140.64 kg/ha when using the original data with models developed at the booting stage directly ("MLR at the early stage"). This study demonstrated that simple mean matching (MM) performed better than the other DA methods, and it was found that "DA then MLR at the optimum stage" performed better than "MLR directly at the early stages" for winter wheat yield forecasting at the early stages. The results indicate that simple domain adaptation methods have great potential for near-real-time crop yield forecasting at the early stages using remote sensing data. Keywords: wheat yield prediction, domain adaptation, Sentinel-2, within-field scale
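As a minimal sketch of the workflow described above — an MLR on four-band reflectance trained at the optimum stage, then applied to early-stage data after mean-matching domain adaptation — the Python example below uses synthetic placeholder arrays. The band values, coefficients and noise levels are assumptions, and "mean matching" is implemented simply as shifting each band of the source domain to the target-domain band means, which is our reading of the abstract.

```python
# Sketch: MLR on four-band reflectance + simple "mean matching" domain
# adaptation. All arrays are synthetic placeholders, not Sentinel-2 data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_pixels = 500

# Placeholder reflectance (blue, green, red, NIR) at the optimum (flowering) stage
X_flowering = rng.uniform(0.02, 0.45, size=(n_pixels, 4))
true_coef = np.array([-2000.0, 1500.0, -3000.0, 6000.0])
y = X_flowering @ true_coef + 4000 + rng.normal(0, 300, n_pixels)  # yield, kg/ha

# Train the MLR at the optimum stage
mlr = LinearRegression().fit(X_flowering, y)

# Early-stage (e.g., booting) reflectance: a shifted distribution (different domain)
X_booting = X_flowering + rng.normal(0.03, 0.01, size=(n_pixels, 4))

def mean_matching(X_source: np.ndarray, X_target: np.ndarray) -> np.ndarray:
    """Shift each band of the source domain so its mean matches the target domain."""
    return X_source - X_source.mean(axis=0) + X_target.mean(axis=0)

X_booting_adapted = mean_matching(X_booting, X_flowering)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print("RMSE, raw early-stage data:    ", round(rmse(y, mlr.predict(X_booting)), 1))
print("RMSE, mean-matched early stage:", round(rmse(y, mlr.predict(X_booting_adapted)), 1))
```

In this toy setting the mean-matched early-stage data yields a lower RMSE than the raw early-stage data, mirroring the qualitative effect reported in the abstract.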
Procedia PDF Downloads 64
667 Reaching the Goals of Routine HIV Screening Programs: Quantifying and Implementing an Effective HIV Screening System in Northern Nigeria Facilities Based on Optimal Volume Analysis
Authors: Folajinmi Oluwasina, Towolawi Adetayo, Kate Ssamula, Penninah Iutung, Daniel Reijer
Abstract:
Objective: Routine HIV screening has been promoted as an essential component of efforts to reduce incidence, morbidity, and mortality. The objectives of this study were to identify the optimal annual volume needed to realize the public health goals of HIV screening in AIDS Healthcare Foundation-supported hospitals and to establish an implementation process to realize that optimal annual volume. Methods: Starting in 2011, a program was established to routinize HIV screening within communities and government hospitals. In 2016, five years of HIV screening data were reviewed to identify the optimal annual proportions of age-eligible patients screened needed to realize the public health goals of reducing new diagnoses and ending late-stage diagnosis (tracked as concurrent HIV/AIDS diagnosis). The analysis demonstrated that rates of new diagnoses levelled off when 42% of age-eligible patients were screened, providing a baseline for routine screening efforts, and that concurrent HIV/AIDS diagnoses reached statistical zero at screening rates of 70%. Annual facility-based targets were restructured to meet these new target volumes. Restructuring efforts focused on right-sizing HIV screening programs to align and transition programs to integrated HIV screening within standard medical care and treatment. Results: Over one million patients were screened for HIV during the five years; 16,033 new HIV diagnoses were made, 82% of those diagnosed (13,206) were successfully linked to care and treatment, and concurrent diagnosis rates went from 32.26% to 25.27%. While screening rates increased by 104.7% over the 5 years, volume analysis demonstrated that rates needed to increase by a further 62.52% to reach the desired 20% baseline and to more than double to reach the optimal annual screening volume. In 2011, facility targets for HIV screening were increased to reflect the volume analysis, and in that third year, 12 of the 19 facilities reached or exceeded the new baseline targets. Conclusions and Recommendation: Quantifying targets against routine HIV screening goals identified the optimal annual screening volume and allowed facilities to scale their program size and allocate resources accordingly. The program transitioned from utilizing non-evidence-based annual volume increases to establishing annual targets based on optimal volume analysis. This has allowed efforts to be evaluated on their ability to realize quantified goals related to the public health value of HIV screening. Optimal volume analysis helps to determine the size of an HIV screening program; it is a public health tool, not a tool to determine whether an individual patient should receive screening. Keywords: HIV screening, optimal volume, HIV diagnosis, routine
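The target-setting arithmetic behind the optimal volume analysis can be sketched in a few lines of Python: annual facility targets are simply the 42% (baseline) and 70% (optimal) shares of age-eligible patients reported above. The facility names and patient volumes below are hypothetical.

```python
# Sketch of optimal-volume target setting. Rates come from the abstract;
# facility names and age-eligible patient volumes are hypothetical.
BASELINE_RATE = 0.42   # share at which new-diagnosis rates levelled off
OPTIMAL_RATE = 0.70    # share at which concurrent HIV/AIDS diagnoses reached ~0

facilities = {
    "Facility A": 24_000,   # age-eligible patients seen per year (hypothetical)
    "Facility B": 9_500,
    "Facility C": 41_200,
}

for name, eligible in facilities.items():
    baseline_target = round(eligible * BASELINE_RATE)
    optimal_target = round(eligible * OPTIMAL_RATE)
    print(f"{name}: baseline target {baseline_target:,} screens/yr, "
          f"optimal target {optimal_target:,} screens/yr")
```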
Procedia PDF Downloads 263
666 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage
Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti
Abstract:
Thermal energy storage (TES) is the storage of heat for later use, thereby filling the gap between energy demand and supply. The most widely used materials for TES are organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range and have a tunable working temperature. However, they suffer from low thermal conductivity and need to be confined to prevent leakage. These two issues can be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the building industry, solar thermal energy collection and the thermal management of electronics. In most cases, TES systems are an additional component to be added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the diffusion of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature-sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin at a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45°C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. They were added at various percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as a matrix to produce laminates through a wet layup technique by stacking five plies of a plain carbon fiber fabric. The samples were characterized microstructurally, thermally and mechanically. Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize also in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the through-thickness thermal conductivity increased with increasing PCM content, due to the presence of CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates with a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more noticeable decrease in the stress and strain at break and in the interlaminar shear strength was detected. Optical and scanning electron microscope images revealed that these decreases could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrate the feasibility of multifunctional structural TES composites and highlight that the PCM size and distribution affect the mechanical properties. In this perspective, this group is working on the encapsulation of paraffin in a sol-gel-derived organosilica shell. Submicron spheres have been produced, and the current activity focuses on the optimization of the synthesis parameters to increase the emulsion efficiency. Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage
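To illustrate the reported proportionality between melting enthalpy and paraffin content, the short Python sketch below estimates the laminate's storable latent heat for the three particle loadings. The paraffin enthalpy (a typical literature value) and the matrix weight fraction of the laminate are assumptions for illustration, not values from the paper.

```python
# Rough estimate of the laminate's storable latent heat, illustrating the
# "enthalpy proportional to paraffin weight fraction" observation above.
# DH_PARAFFIN is a typical literature value and MATRIX_FRACTION is assumed;
# neither comes from the paper.
DH_PARAFFIN = 180.0        # J/g, typical specific melting enthalpy of paraffin
PARAFFIN_IN_BLEND = 0.90   # blend is paraffin + 10 wt% CNTs
MATRIX_FRACTION = 0.40     # assumed epoxy-matrix weight fraction of the laminate

for particle_fraction in (0.20, 0.30, 0.40):    # PCM particles in the matrix
    paraffin_in_laminate = MATRIX_FRACTION * particle_fraction * PARAFFIN_IN_BLEND
    dh_laminate = paraffin_in_laminate * DH_PARAFFIN
    print(f"{particle_fraction:.0%} particles in matrix -> "
          f"{paraffin_in_laminate:.1%} paraffin in laminate, "
          f"~{dh_laminate:.0f} J/g latent heat")
```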
Procedia PDF Downloads 160
665 Teaching of Entrepreneurship and Innovation in Brazilian Universities
Authors: Marcelo T. Okano, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi
Abstract:
The teaching of entrepreneurship and innovation in Brazilian universities has increased in recent years due to several factors, such as the emergence of disciplines like biotechnology, increased globalization, reduced basic funding and new perspectives on the role of the university in the system of knowledge production. Innovation is increasingly seen as an evolutionary process that involves different institutional spheres or sectors in society. Entrepreneurship is a milestone on the road towards economic progress and makes a huge contribution towards the quality and future hopes of a sector, economy or even a country. Entrepreneurship is as important in small and medium-sized enterprises (SMEs) and local markets as in large companies and national and international markets, and it is just as key a consideration for public organizations as for private ones. Entrepreneurship helps to encourage competition in the current environment shaped by globalization. There is an increasing tendency for government policy to promote entrepreneurship for its apparent economic benefit. Accordingly, governments seek to employ entrepreneurship education as a means to stimulate increased levels of economic activity. Entrepreneurship education and training (EET) is growing rapidly in universities and colleges throughout the world, and governments are supporting it both directly and through funding major investments in advice provision to would-be entrepreneurs and existing small businesses. The Triple Helix of university-industry-government relations is compared with alternative models for explaining the current research system in its social contexts. Communications and negotiations between institutional partners generate an overlay that increasingly reorganizes the underlying arrangements. To achieve the objective of this research, a survey of the literature on entrepreneurship and innovation was carried out, followed by field research with 100 students of Fatec. To collect the data needed for analysis, we used exploratory research of a qualitative nature. We asked respondents to rate their degree of knowledge of ten topics related to entrepreneurship and innovation; responses were given on a 4-level Likert scale (none, small, medium, large). We can conclude that terms such as entrepreneurship and innovation are known by most students because the university propagates them across disciplines, lectures, and innovation institutes. More specific items, such as the canvas and the design thinking model, are unknown to most respondents. This shows the importance of the university in teaching innovation and entrepreneurship and in transmitting this knowledge to students in order to equalize knowledge levels. As a future project, these items will be re-evaluated to create indicators for measuring the knowledge level. Keywords: Brazilian universities, entrepreneurship, innovation, globalization
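As an illustration of how such 4-level Likert responses can be tabulated, the Python sketch below summarizes hypothetical answers for a handful of topics; the topic names and the simulated responses are placeholders, not the survey's actual data.

```python
# Illustrative tabulation of 4-level Likert data (none/small/medium/large).
# Topic names and responses are hypothetical placeholders, not the survey data.
import numpy as np
import pandas as pd

levels = ["none", "small", "medium", "large"]
topics = ["entrepreneurship", "innovation", "canvas", "design thinking"]  # assumed subset

rng = np.random.default_rng(1)
n_students = 100
responses = pd.DataFrame(
    rng.choice(levels, size=(n_students, len(topics))), columns=topics
)

# Percentage of students at each knowledge level, per topic
summary = (
    responses.apply(lambda col: col.value_counts(normalize=True))
    .reindex(levels)
    .fillna(0.0)
    .mul(100)
    .round(1)
)
print(summary)
```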
Procedia PDF Downloads 507
664 Ultrasound Assisted Alkaline Potassium Permanganate Pre-Treatment of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Lignocellulose is the largest reservoir of inexpensive, renewable carbon. It is composed of lignin, cellulose and hemicellulose. Cellulose and hemicellulose are composed of the reducing sugars glucose, xylose and several other monosaccharides, which can be metabolised by microorganisms to produce several value-added products such as biofuels, enzymes, amino acids, etc. Enzymatic treatment of lignocellulose leads to the release of monosaccharides such as glucose and xylose. However, factors such as the presence of lignin, crystalline cellulose, acetyl groups, pectin, etc. contribute to recalcitrance, restricting the effective enzymatic hydrolysis of cellulose and hemicellulose. In order to overcome these problems, pre-treatment of lignocellulose is generally carried out, which essentially facilitates better degradation of lignocellulose. A range of pre-treatment strategies is commonly employed, classified by mode of action, viz. physical, chemical, biological and physico-chemical. However, existing pre-treatment strategies result in lower sugar yields and the formation of inhibitory compounds. To overcome these problems, we propose a novel pre-treatment which utilises the superior oxidising capacity of alkaline potassium permanganate, assisted by ultra-sonication, to break the covalent bonds in spent coffee waste and remove recalcitrant compounds such as lignin. The pre-treatment was conducted for 30 minutes using 2% (w/v) potassium permanganate at room temperature with a solid-to-liquid ratio of 1:10. The pre-treated spent coffee waste (SCW) was subjected to enzymatic hydrolysis using the enzymes cellulase and hemicellulase. Shake flask experiments were conducted with a working volume of 50 mL of buffer containing 1% substrate. The results showed that the novel pre-treatment strategy yielded 7 g/L of reducing sugar, as compared to 3.71 g/L obtained from biomass that had undergone dilute acid hydrolysis, after 24 hours. From the results obtained, it is fairly certain that ultrasonication assists the oxidation of recalcitrant components in lignocellulose by potassium permanganate. Enzyme hydrolysis studies suggest that ultrasound-assisted alkaline potassium permanganate pre-treatment is far superior to treatment with dilute acid. Furthermore, SEM, XRD and FTIR analyses were carried out to assess the effect of the new pre-treatment strategy on the structure and crystallinity of the pre-treated spent coffee waste. This novel one-step pre-treatment strategy was implemented under mild conditions and exhibited high efficiency in the enzymatic hydrolysis of spent coffee waste. Further study and scale-up are in progress in order to realise future industrial applications. Keywords: spent coffee waste, alkaline potassium permanganate, ultra-sonication, physical characterisation
Procedia PDF Downloads 357
663 Rheological and Microstructural Characterization of Concentrated Emulsions Prepared by Fish Gelatin
Authors: Helen S. Joyner (Melito), Mohammad Anvari
Abstract:
Concentrated emulsions stabilized by proteins are systems of great importance in food, pharmaceutical and cosmetic products. Controlling emulsion rheology is critical for ensuring the desired properties during formation, storage, and consumption of emulsion-based products. Studies on concentrated emulsions have focused on the rheology of monodispersed systems. However, emulsions used for industrial applications are polydispersed in nature, and this polydispersity is regarded as an important parameter that also governs the rheology of concentrated emulsions. Therefore, the objective of this study was to characterize the rheological (small- and large-deformation behavior) and microstructural properties of concentrated emulsions that were not truly monodispersed, as usually encountered in food products such as margarines, mayonnaise, creams, and spreads. The concentrated emulsions were prepared at different concentrations of fish gelatin (FG; 0.2, 0.4, 0.8% w/v in the whole emulsion system), an oil-water ratio of 80:20 (w/w), a homogenization speed of 10000 rpm, and 25 °C. Confocal laser scanning microscopy (CLSM) was used to determine the microstructure of the emulsions. To prepare samples for CLSM analysis, FG solutions were stained with fluorescein isothiocyanate dye. Emulsion viscosity profiles were determined using shear rate sweeps (0.01 to 100 1/s). The linear viscoelastic region (LVR) of each emulsion was determined using strain sweeps (0.01 to 100% strain). Frequency sweeps were performed in the LVR (0.1% strain) from 0.6 to 100 rad/s. Large amplitude oscillatory shear (LAOS) testing was conducted by collecting raw waveform data at 0.05, 1, 10, and 100% strain at 4 different frequencies (0.5, 1, 10, and 100 rad/s). All measurements were performed in triplicate at 25 °C. The CLSM results revealed that increased fish gelatin concentration resulted in more stable oil-in-water emulsions with homogeneous, finely dispersed oil droplets. Furthermore, the protein concentration had a significant effect on emulsion rheological properties. Apparent viscosity and dynamic moduli at small deformations increased with increasing fish gelatin concentration. These results were related to increased inter-droplet network connections caused by increased fish gelatin adsorption at the surface of oil droplets. Nevertheless, all samples showed shear-thinning and weak-gel behavior over the shear rate and frequency sweeps, respectively. Lissajous plots, or plots of stress versus strain, and phase lag values were used to determine the nonlinear behavior of the emulsions in LAOS testing. Greater distortion of the elliptical shape of the plots, together with higher phase lag values, was observed at large strains and frequencies in all samples, indicating increased nonlinear behavior. Shifts from elastic-dominated to viscous-dominated behavior were also observed. These shifts were attributed to damage to the sample microstructure (e.g., gel network disruption), which would lead to viscous-type behaviors such as permanent deformation and flow. Unlike the small-deformation results, the LAOS behavior of the concentrated emulsions was not dependent on fish gelatin concentration: systems with different microstructures showed similar nonlinear viscoelastic behaviors. The results of this study provide valuable information that can be used to incorporate concentrated emulsions in emulsion-based food formulations. Keywords: concentrated emulsion, fish gelatin, microstructure, rheology
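As a rough illustration of the Lissajous/phase-lag analysis described above, the Python sketch below builds an elastic Lissajous curve from synthetic single-harmonic waveforms and recovers the phase lag. The amplitudes and phase angle are arbitrary example values, not measurements from this study, and a real LAOS analysis would work on the measured raw waveforms (including higher harmonics).

```python
# Illustrative sketch: construct a Lissajous plot (stress vs. strain) and
# estimate the phase lag from synthetic oscillatory waveforms.
import numpy as np
import matplotlib.pyplot as plt

omega = 10.0            # angular frequency, rad/s
gamma0 = 0.10           # strain amplitude (10% strain)
sigma0 = 50.0           # stress amplitude, Pa (arbitrary)
delta = np.deg2rad(35)  # phase lag between stress and strain (assumed)

t = np.linspace(0, 4 * np.pi / omega, 2000)   # a few oscillation cycles
strain = gamma0 * np.sin(omega * t)
stress = sigma0 * np.sin(omega * t + delta)    # linear (single-harmonic) response

# Recover the phase lag from the waveforms:
# <stress * strain> over whole cycles = 0.5 * sigma0 * gamma0 * cos(delta).
cos_delta = 2 * np.mean(stress * strain) / (sigma0 * gamma0)
print(f"recovered phase lag ~ {np.degrees(np.arccos(cos_delta)):.1f} deg")

plt.plot(strain, stress)
plt.xlabel("strain (-)")
plt.ylabel("stress (Pa)")
plt.title("Elastic Lissajous curve (the ellipse distorts as the response becomes nonlinear)")
plt.tight_layout()
plt.show()
```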
Procedia PDF Downloads 275