709 The Basketball Show in the North of France: When the NBA Globalized Culture Meets the Local Carnival Culture
Authors: David Sudre
Abstract:
Today, the National Basketball Association (NBA) is the cultural model of reference for most French basketball stakeholders (players, coaches, team and league managers). In addition to its strong impact on how this sport is played and perceived, the NBA also influences the way professional basketball shows are organized in France (within the Jeep Elite league). The objective of this research is to see how, and to what extent, the NBA show, as a globalized cultural product, disrupts Jeep Elite's cultural codes in the organization of its shows. The article questions the intercultural phenomena at stake in French sports cultures through the prism of the basketball match. This angle sheds light on the underlying relationships between local and global elements. The results of this research come from a one-year survey conducted in Le Portel, a small town in northern France and home of the Etoile Sportive Saint Michel (ESSM), a Jeep Elite club. An ethnographic approach was favored, entailing extensive participant observation and semi-structured interviews with supporters of ESSM Le Portel. Through this ethnographic work with the team's fan groups (before, during, and after the games), the researchers were able to better understand the cultural and identity issues that play out in the "Cauldron," the basketball arena of ESSM Le Portel. The results demonstrate, at first glance, that many basketball events organized in France are copied from the American model. It seems difficult not to imitate the reference that the NBA represents, whether at the French All-Star Game or a Jeep Elite game at Le Portel. In this case, an acculturation process seems to occur, not only in the way people play but also in the creation of the show (cheerleaders, animations, etc.).
However, this globalized American basketball culture, although re-appropriated, is also being modified by the members of ESSM Le Portel within their locality. Indeed, they juggle their culture of origin and their culture of reference to build their basketball show within their sociocultural environment. In this way, Le Portel managers and supporters introduce elements characteristic of their local culture into the show, such as carnival customs and celebrations, two ingredients that fully contribute to the creation of their identity. Ultimately, in this context of "glocalization," this research ascertains, on the one hand, that the identity of French basketball becomes harder to outline and, on the other hand, that the "Cauldron" turns out to be a place to preserve (fantasized) local identities, or even a place of (unconscious) resistance to the dominant model of American basketball culture.
Keywords: basketball, carnival, culture, globalization, identity, show, sport, supporters.
Procedia PDF Downloads 151
708 Influence of Sewage Sludge on Agricultural Land Quality and Crop
Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu
Abstract:
Since the accumulation of large quantities of sewage sludge produces serious environmental problems, numerous environmental specialists are looking for solutions to this problem. The sewage sludge obtained from the treatment of municipal wastewater may be used as fertiliser on agricultural soils because it contains large amounts of nitrogen, phosphorus, and organic matter. In many countries, sewage sludge is used instead of chemical fertilisers in agriculture, this being the most feasible way to reduce the ever-larger quantities of sludge. The use of sewage sludge on agricultural soils is allowed only with strict monitoring of its physical and chemical parameters, because heavy metals are present in varying amounts in sewage sludge. Exceeding the maximum permitted quantities of harmful substances may pollute agricultural soils and force their withdrawal from use, because plants can take up the heavy metals existing in soil, and these metals will most probably reach humans and animals through food. The sewage sludge analysed for the present paper was taken from the Wastewater Treatment Plant (WWTP) of Galati, Romania. The physico-chemical parameters determined were: pH (upH), total organic carbon (TOC) (mg L⁻¹), N-total (mg L⁻¹), P-total (mg L⁻¹), N-NH₄ (mg L⁻¹), N-NO₂ (mg L⁻¹), N-NO₃ (mg L⁻¹), Fe-total (mg L⁻¹), Cr-total (mg L⁻¹), Cu (mg L⁻¹), Zn (mg L⁻¹), Cd (mg L⁻¹), Pb (mg L⁻¹), Ni (mg L⁻¹). The determination methods were electrometric (pH, C, TSD), with a portable HANNA HI 9828 multiparameter meter, and spectrophotometric, with a Merck Spectroquant NOVA 60 spectrophotometer and specific Merck parameter kits. The tests pointed out that the sludge analysed is low in heavy metals, falling within the legal limits; the quantities of metals measured were much lower than the maximum allowed.
The tests made to determine the content of nutrients in the sewage sludge have shown that the existing nutrients may be used to increase the fertility of agricultural soils. Other tests were carried out on land where sewage sludge was applied, in order to establish the maximum quantity of sludge that may be used without becoming a source of pollution. The tests were made on three plots: a first plot with no sludge and no chemical fertilisers applied, a second plot on which only sewage sludge was applied, and a third plot on which small amounts of chemical fertilisers were applied in addition to sewage sludge. The results showed that production increases when the soil is treated with sludge and small amounts of chemical fertilisers. Based on the results of the present research, a fertilisation plan has been suggested. This plan should be reconsidered each year based on the crops planned, the yields proposed, the agrochemical indications, the sludge analysis, etc.
Keywords: agricultural use, crops, physico-chemical parameters, sewage sludge
Procedia PDF Downloads 291
707 Analysis of Storm Flood in Typical Sewer Networks in High Mountain Watersheds of Colombia Based on SWMM
Authors: J. C. Hoyos, J. Zambrano Nájera
Abstract:
Increasing urbanization has led to changes in the natural dynamics of watersheds, causing problems such as increases in runoff volumes, peak flow rates, and flow velocities, so that the risk of storm flooding increases. Sewerage networks designed 30-40 years ago do not account for these increases in flow volumes and velocities. In addition, in Andean cities with steep slopes the problem is worse, because velocities are even higher, preventing the sewerage network from working properly and making cities less resilient to landscape change and climate change. In Latin America, and especially Colombia, this is a major problem: by the late twentieth century more than 70% of the population lived in urban areas, the urban population having grown by approximately 790% over the 1940-1990 period. It therefore becomes very important to study how changes in hydrological behavior affect the hydraulic capacity of sewerage networks in Andean urban watersheds. This research aims to determine the impact of urbanization on the hydrology of high-sloped urban watersheds. To this end, the experimental Palogrande-San Luis urban watershed, located in the city of Manizales, Colombia, will be used as the study area. Manizales is a city in central-western Colombia, located in the Colombian Central Mountain Range (part of the Andes), with abrupt topography (average altitude 2,153 m). The climate in Manizales is quite uniform, but due to its high altitude the city receives high precipitation (1,545 mm/year on average) with high humidity (83% on average). The behavior of the current sewerage network will be reviewed with the hydraulic model SWMM (Storm Water Management Model). Based on SWMM, the hydrological response of the selected urban watershed will be evaluated under design storms of different frequencies in the region, considering effects such as drainage capacity, water-logging, and overland flow on roads.
Cartographic information was obtained from Geographic Information System (GIS) thematic maps of the Institute of Environmental Studies of the Universidad Nacional de Colombia and the utility Aguas de Manizales S.A. Rainfall and streamflow data were obtained from 4 rain gages and 1 stream gage. This information will allow determining critical issues in the design of drainage systems in urban watersheds with very high slopes, and which practices should be discarded or recommended.
Keywords: land cover changes, storm sewer system, urban hydrology, urban planning
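As a rough illustration of the kind of peak-flow estimate that underlies such an analysis, the sketch below applies the rational method (Q = C·i·A/360), a common simplification; SWMM itself performs full hydrodynamic routing, and the runoff coefficients, rainfall intensity, and catchment area used here are illustrative assumptions, not values from the study.

```python
def rational_peak_flow(c: float, i_mm_per_h: float, area_ha: float) -> float:
    """Peak runoff (m^3/s) by the rational method.

    c          -- dimensionless runoff coefficient (higher for paved urban land)
    i_mm_per_h -- design-storm rainfall intensity in mm/h
    area_ha    -- catchment area in hectares
    The factor 360 converts mm/h * ha into m^3/s.
    """
    return c * i_mm_per_h * area_ha / 360.0

# Hypothetical steep urban subcatchment: heavily paved (C = 0.85),
# under a 60 mm/h design storm, over 25 ha.
q_urban = rational_peak_flow(0.85, 60.0, 25.0)

# The same catchment before urbanization (C = 0.35): urbanization
# more than doubles the peak flow the sewer network must carry.
q_rural = rational_peak_flow(0.35, 60.0, 25.0)

print(f"urbanized: {q_urban:.2f} m^3/s, pre-urban: {q_rural:.2f} m^3/s")
```

The comparison of the two coefficients mirrors the abstract's point: the network's design capacity, fixed decades ago, is confronted with runoff peaks that land-cover change alone can more than double.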
Procedia PDF Downloads 263
706 A Lung Cancer Patient Grief Counseling Nursing Experience
Authors: Syue-Wen Lin
Abstract:
Objective: This article explores the nursing experience of a 64-year-old female lung cancer patient who underwent a thoracoscopic left lower lobectomy and treatment. The patient has a history of diabetes. The nursing process included cancer treatment, postoperative pain management, wound care and healing, and family grief counseling. Methods: The nursing period was from March 11 to March 15, 2024. During this time, strict aseptic wound dressing procedures and advanced wound care techniques were employed to promote wound healing and prevent infection. Postoperatively, the patient developed aspiration pneumonia with worsening symptoms, and re-intubation was necessary. Given the patient's advanced cancer and deteriorating condition, the nursing team provided comprehensive grief counseling and care tailored to the patient's physical and psychological needs, as well as to the emotional needs of the family. Considering the complexity of the patient's condition, including advanced cancer, palliative care was also integrated into the overall nursing process to alleviate discomfort and provide psychological support. Results: Gordon's Functional Health Patterns were used for assessment, including evaluation of the patient's medical history, physical assessment, and interviews, to provide individualized nursing care; collecting such data is important for understanding the patient's physical, psychological, social, and spiritual dimensions. The interprofessional critical care team collaborated with the hospice team to understand the psychological state of the patient's family and to develop a comprehensive approach to care. Family meetings were convened, and support was provided to the patient during the final stages of her life. Additionally, the combination of cancer care, pain management, wound care, and palliative care ensured comprehensive support for the patient throughout her recovery, thereby improving her quality of life.
Conclusion: Lung cancer and aspiration pneumonia present significant challenges to patients, and the nursing team not only provided critical care but also addressed individual patient needs through cancer care, pain management, wound care, and palliative care interventions. These measures effectively improved the patient's quality of life, provided compassionate palliative care at the end of life, and allowed her to spend the last stretch of her life with her family. Nursing staff worked closely with the family to develop a comprehensive care plan ensuring that the patient received high-quality medical care as well as psychological support and a comfortable recovery environment.
Keywords: grief counseling, lung cancer, palliative care, nursing experience
Procedia PDF Downloads 30
705 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems
Authors: Alexander J. Severinsky
Abstract:
Radiative forcing by greenhouse gases (GHG) increases the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the surface temperature increase is delayed over time. The air temperature increase, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The inquiry stems from the author's skepticism that current changes can be explained by a ~1 °C rise in global average surface temperature over the last 50-60 years. The only other plausible cause to explore is a rise in atmospheric temperature. The study analyzes the air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative forcing of GHGs with a high level of scientific understanding was near 4.7 W/m² on average over the Earth's entire surface in 2018, as compared with pre-industrial times in the mid-1700s. The additional radiative forcing from fast feedbacks involving various forms of water contributes approximately an additional ~15 W/m². In 2018, these radiative forcings heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise even without any additional increase in the concentration of GHGs, primarily carbon dioxide and methane. These findings on the radiative forcing of GHGs in 2018 were applied to estimate effects on major Earth ecosystems.
This additional forcing of nearly 20 W/m² causes an increase in the ice melting rate of over 90 cm/year, a green-leaf temperature increase of nearly 5 °C, and a work-energy increase of air of approximately 40 joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts, the increase in forest fires, and the increased energy of tornadoes, typhoons, hurricanes, and extreme weather much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove much more effective when directed toward the reduction of existing GHGs in the atmosphere.
Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air
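The abstract's arithmetic can be checked in a few lines: the total forcing is the direct GHG term plus the fast water feedbacks, and the ratio of the stated ~5.1 °C air temperature rise to that total gives the sensitivity parameter the estimate implies. This is only a consistency check on the numbers quoted above, not an independent climate calculation.

```python
direct_forcing = 4.7    # W/m^2, direct GHG forcing in 2018 vs. mid-1700s (from the abstract)
fast_feedbacks = 15.0   # W/m^2, fast water-related feedbacks (from the abstract)
total_forcing = direct_forcing + fast_feedbacks  # ~20 W/m^2, as the abstract states

air_warming = 5.1       # deg C, atmospheric warming attributed to this forcing

# Sensitivity of air temperature to forcing (deg C per W/m^2) implied by the
# abstract's own figures.
implied_sensitivity = air_warming / total_forcing

print(f"total forcing ~ {total_forcing:.1f} W/m^2, "
      f"implied sensitivity ~ {implied_sensitivity:.2f} deg C per W/m^2")
```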
Procedia PDF Downloads 105
704 Faculty Use of Geospatial Tools for Deep Learning in Science and Engineering Courses
Authors: Laura Rodriguez Amaya
Abstract:
Advances in science, technology, engineering, and mathematics (STEM) are viewed as important to countries' national economies and to their capacity to compete in the global economy. However, many countries see low numbers of students entering these disciplines. To strengthen the professional STEM pipeline, it is important that students be retained in these disciplines at universities. Scholars agree that to retain students in universities' STEM degrees, STEM course content must show the relevance of these academic fields to students' daily lives. By increasing students' understanding of the importance of these degrees and careers, students' motivation to remain in these academic programs can also increase. An effective way to make STEM content relevant to students' lives is the use of geospatial technologies and geovisualization in the classroom. The Geospatial Revolution, and the science and technology associated with it, has provided scientists and engineers with an enormous amount of data about Earth and Earth systems. These data can be used in the classroom to support instruction and make content relevant to all students. The purpose of this study was to find out the prevalence of geospatial technologies and geovisualization as teaching practices at a U.S. university. The Teaching Practices Inventory survey, a modified version of the Carl Wieman Science Education Initiative Teaching Practices Inventory, was selected for the study. Faculty in the STEM disciplines who participated in a summer learning institute at a 4-year university in the USA constituted the study population. One of the summer learning institute's main purposes was to improve the teaching of STEM courses, particularly the gateway courses taken by many STEM majors. The sample represents 97.5% of the total number of summer learning institute participants.
Basic descriptive statistics were computed with the Statistical Package for the Social Sciences (SPSS) to find out: 1) the percentage of faculty using geospatial technologies and geovisualization; 2) whether faculty members' departments affected their use of geospatial tools; and 3) whether the number of years in a teaching capacity affected their use of geospatial tools. Findings indicate that only 10 percent of respondents had used geospatial technologies, and 18 percent had used geovisualization. In addition, the use of geovisualization among faculty of different disciplines was broader than the use of geospatial technologies, which was concentrated in the engineering departments. The data seem to indicate a lack of incorporation of geospatial tools in STEM education. The use of geospatial tools is an effective way to engage students in deep STEM learning. Future research should look at the effect on student learning and retention in science and engineering programs when geospatial tools are used.
Keywords: engineering education, geospatial technology, geovisualization, STEM
Procedia PDF Downloads 253
703 Data Quality and Associated Factors on Regular Immunization Programme at Ararso District: Somali Region-Ethiopia
Authors: Eyob Seife, Molla Alemayaehu, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew
Abstract:
Globally, immunization averts between 2 and 3 million deaths yearly, but vaccine-preventable diseases still account for a large share of deaths in Sub-Saharan African countries, taking the majority of under-five deaths yearly, which indicates the need for consistent and timely information to support evidence-based decisions and save the lives of these vulnerable groups. However, ensuring data of sufficient quality and promoting a culture of information use at the point of collection remains critical and challenging, especially in remote areas such as the Ararso district, which was selected based on the hypothesis that there is a difference between reported and recounted immunization data. Data quality depends on different factors, among them organizational, behavioral, technical, and contextual factors. A cross-sectional quantitative study was conducted in September 2022 in the Ararso district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 4 health facilities (1 health center and 3 health posts) of primary health care units over one fiscal year (12 months) to determine the accuracy ratio, availability, and timeliness of reports. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) and the availability and timeliness of reports were assessed. The accuracy ratios formulated were: the first and third doses of pentavalent vaccines, fully immunized (FI), TT2+, and the first dose of measles-containing vaccine (MCV). In this study, facility-level results showed poor timeliness at all levels, and both over-reporting and under-reporting were observed at all levels when computing the accuracy ratio of registrations to health post reports found at health centers, for almost all antigens verified.
The quality index (QI) of all facilities also showed poor results. Most of the verified immunization data accuracy ratios were found to be relatively better than the quality index and the timeliness of reports. Attention should therefore be given to improving staff capacity, the timeliness of reports, and the quality of monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decisions at all levels, especially in remote areas.
Keywords: accuracy ratio, Ararso district, quality of monitoring system, regular immunization program, timeliness of reports, Somali region-Ethiopia
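The accuracy (verification) ratio at the heart of a DQS exercise is simply the recounted doses from source documents divided by the reported doses, with values above 100% indicating under-reporting and below 100% over-reporting. The sketch below illustrates that computation with made-up tallies; the antigen names follow the abstract, but the numbers are hypothetical, not study data.

```python
def accuracy_ratio(recounted: int, reported: int) -> float:
    """Verification ratio (%) = recounted doses / reported doses * 100.
    > 100% means under-reporting; < 100% means over-reporting."""
    if reported == 0:
        raise ValueError("no reported doses to verify against")
    return 100.0 * recounted / reported

# Hypothetical monthly tallies for one health post: (recounted, reported).
verified = {
    "Penta1": (118, 110),  # register shows more doses than were reported
    "Penta3": (95, 104),   # register shows fewer doses than were reported
    "MCV1": (100, 100),    # consistent
}

for antigen, (recounted, reported) in verified.items():
    ratio = accuracy_ratio(recounted, reported)
    label = ("under-reported" if ratio > 100
             else "over-reported" if ratio < 100 else "consistent")
    print(f"{antigen}: {ratio:.1f}% ({label})")
```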
Procedia PDF Downloads 73
702 Conceptualizing the Moroccan Amazigh
Authors: Sanaa Riaz
Abstract:
The free people, the Amazigh (plural Imazighen), often known by the more popular exonym Berber, are spread across several North African countries, with the largest population in Morocco, and have been substantially misunderstood and differentially showcased by entities ranging from Western-school-educated scholars, to human, health, and women's rights organizations, to the state, to the international community. This paper is an examination of these various conceptualizations of the Imazighen. With the popularity of the Arab Spring movement to oust monarchical and dictatorial rulers across the Middle East and North Africa, the Moroccan monarchy introduced various reform programs to win public favor. These included social, economic, and educational reforms to incorporate marginalized groups such as the Imazighen. The monarchy has ushered Amazigh representation into public offices and into the public landscape through the Amazigh script, even though theirs has been an oral culture. After the Arab Spring, the Justice and Development Party, an Islamist party, took power in Morocco owing to its accessibility to the masses. In September 2021, unlike in Egypt and Tunisia, where military and constitutional means were sought, Morocco successfully removed it from power through the ballot, a real victory for the neutral monarchy and for its representation as a moderate, secular, and liberal force for the nation. As a result, supporting the perpetuation of Amazigh linguistic identity also became synonymous with making a secular statement as a Muslim. It has led to the telling of Amazigh identity at state museums as representing an indigenous, pure, diverse, culturally rich, and united Morocco. Reform efforts have also prioritized an amiable look at the economic and familial links of Moroccan Jews, the few thousand families still left in the country, and a showcasing, through museums and cultural centers, of Jewish identity as Moroccan first.
In that endeavor, it is interesting to note the coverage of Jews as indigenous to Morocco through the embracing of their "folk" cultural and religious practices, those that are not continued outside Morocco. In this epistemology, the concept of the Moroccan Jew becomes similar to that of the indigenous Amazigh, both cherished as among the oldest peoples of Morocco and symbols of its unity and resilience. In urban discourse, Amazigh identity is a concept that continues to be part of the deliberations of elites and scholars graduating from French schools on the incorporation of rural and illiterate Morocco into economic and educational advancement. Yet, with the constant influx of migrants from Western Sahara into cities like Fez and Marrakesh, Amazigh has often been described as the umbrella term for those of "mixed" ethnic ancestry who constitute the country's free population. In sum, Amazigh identity highlights the changing discourse on marginalized communities, human rights, representation, Moroccan nationhood, and regional and transnational politics. The aim of this paper is to analyze perceptions of Amazigh identity in Morocco after the 2021 ousting of the Islamist party, using data from state-sponsored museum displays and cultural centers collected in Summer 2022 and scholarly analyses of Amazigh identity, representation, and rights in Morocco.
Keywords: Amazigh identity, Morocco, representation, state politics
Procedia PDF Downloads 95
701 Impact of Weather Conditions on Non-Food Retailers and Implications for Marketing Activities
Authors: Noriyuki Suyama
Abstract:
This paper discusses purchasing behavior in retail stores, with a particular focus on the impact of weather changes on customers' purchasing behavior. Weather conditions are one of the factors that greatly affect the management and operation of retail stores. However, there is very little research on the relationship between weather conditions and marketing from an academic perspective, although the topic has practical importance and a body of experience-based knowledge. For example, customers are more hesitant to go out when it rains than when it is sunny, and they may postpone purchases or buy only the minimum necessary items even if they do go out. It is not difficult to imagine that weather has a significant impact on consumer behavior. To the best of the author's knowledge, only a few studies have delved into the purchasing behavior of individual customers. According to Hirata (2018), the economic impact of weather in the United States is estimated at 3.4% of GDP, or "$485 billion ± $240 billion" per year. However, weather data are not yet fully utilized. Representative industries include transportation (e.g., airlines, shipping, roads, railroads), leisure (e.g., leisure facilities, event organizers), energy and infrastructure (e.g., construction, factories, electricity and gas), agriculture (e.g., agricultural organizations, producers), and retail (e.g., retail, food service, convenience stores). This paper focuses on the retail industry and advances research on weather. The first reason is that, as far as the author has investigated, only grocery retailers use temperature, rainfall, wind, weather, and humidity as parameters for their products, and there are very few examples of academic use in other retail sectors.
Second, according to NBL's "Toward Data Utilization Starting from Consumer Contact Points in the Retail Industry," labor productivity in the retail industry is very low compared to other industries; according to Hirata (2018), improving it is recognized as a major challenge. On the other hand, according to the "Survey and Research on Measurement Methods for Information Distribution and Accumulation (2013)" by the Ministry of Internal Affairs and Communications, the amount of data accumulated in the retail industry is extremely large, so new applications can be expected from analyzing these data together with weather data. Third, a wealth of weather-related information is currently available. There are, for example, companies such as WeatherNews, Inc. that make weather information their business and not only disseminate weather information but also provide information that supports businesses in various industries. Despite the wide range of influences that weather has on business, its impact has not been a subject of research in the retail industry, where business models need to be imagined, especially from a micro perspective. In this paper, the author discusses the important aspects of the impact of weather on marketing strategies in the non-food retail industry.
Keywords: consumer behavior, weather marketing, marketing science, big data, retail marketing
Procedia PDF Downloads 84
700 Interventions to Improve the Performance of Community Based Health Insurance in Low- and Lower Middle-Income Countries: A Systematic Review
Authors: Scarlet Tabot Enanga Longsti
Abstract:
Community-Based Health Insurance (CBHI) schemes have been proposed as a possible means to achieve affordable health care in low- and lower-middle-income countries. The existing evidence provides mixed results on the impact of CBHI schemes on healthcare utilisation and out-of-pocket payments (OOPP) for healthcare. Over 900 CBHI schemes have been implemented in underdeveloped countries, and these schemes have undergone various modifications over the years. Prior reviews have suggested that different designs of CBHI schemes may produce different outcomes. Objectives: This review sought to determine the interventions that affect the impact of CBHI schemes on OOPP and health service utilisation. Interventions in this study referred to any action or modification in the design of a CBHI scheme that affected the scheme's impact on OOPP and/or healthcare utilisation. Methods: Any CBHI study conducted in a lower-middle-income country, using an experimental design, including OOPP or healthcare utilisation as outcome variables, and published in either English or French was included. Studies were searched for in MEDLINE, Embase, CINAHL, EconLit, IBSS, Web of Science, the Cochrane Library, and Global Index Medicus from July to August 2023. Bias was assessed using the Joanna Briggs Institute quality assessment tools for randomized controlled trials and quasi-experimental studies. A narrative synthesis was done. Results: 12 studies were included in the review, covering a total of 69 villages, 13,653 households, and 62,786 participants. The average premium was 4.8 USD/year, and most CBHI schemes had flat rates. The study revealed that a range of interventions affects OOPP and healthcare utilisation. Five categories of interventions were identified; the intervention with the highest impact on OOPP and utilisation was audit visits.
Next in line came external funds, training scheme workers, and engaging community leaders and village heads to advertise the scheme. Free healthcare led to a significant increase in the utilisation of health services and a significant reduction in catastrophic health expenditure, but an insignificant effect on OOPP among the insured compared with the uninsured. Conclusions: Community-based health insurance could pave the way for universal health care in low- and middle-income countries. However, this will only be possible if careful thought is given to how schemes are designed. Given the heterogeneity of studies and results on CBHI schemes, there is a need for further research so that more effective designs can be developed.
Keywords: community based health insurance, developing countries, health service utilisation, out of pocket payment
Procedia PDF Downloads 67
699 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study
Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha
Abstract:
Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is assumed to be pertinent for the early screening and prevention of strabismus, the main objective of this study was to assess knowledge and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was done in Woreta town from April to May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate; the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9,229 households. A sampling fraction "k" was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households, with a sampling fraction (k) of 21, i.e., every 21st household was approached for inclusion in the study. From each household with more than one adult, one individual was selected randomly using the lottery method to obtain the final sample. The data were collected through face-to-face interviews with a pretested, semi-structured questionnaire, which was translated from English to Amharic and back to English to maintain its consistency. Data were entered using EpiData version 3.1, then processed and analyzed with SPSS version 20. Descriptive and analytical statistics were employed to summarize the data. A p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, giving a response rate of 94.5%.
Of those who responded, 56.6% were males. Of all the participants, 36.9% were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%. It was shown that 53.9% of the respondents had a favorable attitude. Older age, higher educational level, a history of eye examination, and a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions of good knowledge and favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need to provide health education and promotion campaigns on strabismus to the community: what strabismus is, its possible treatments, and the need to bring children to the eye care center for early diagnosis and treatment. The study advocates for prospective research employing qualitative study designs and suggests studies that investigate cause-effect relationships. Keywords: strabismus, knowledge, attitude, Woreta
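The sample-size arithmetic described above can be reproduced in a short sketch (the rounding convention and the point at which the non-response inflation is applied are assumptions, since the abstract does not spell them out):

```python
import math

def sample_size(p=0.50, z=1.96, margin=0.05, nonresponse=0.10):
    """Single population proportion formula, n = z^2 * p * (1 - p) / d^2,
    then inflated for the expected non-response rate."""
    n0 = math.ceil(z ** 2 * p * (1 - p) / margin ** 2)   # 384.16 -> 385
    return math.ceil(n0 * (1 + nonresponse))

n = sample_size()       # final computed sample size
k = 9229 // n           # systematic sampling interval from 9,229 households
print(n, k)             # 424 21
```

With these inputs the formula reproduces both the reported sample size of 424 and the sampling fraction of 21.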
Procedia PDF Downloads 63
698 Bridging Minds, Building Success Beyond Metrics: Uncovering Human Influence on Project Performance: Case Study of University of Salford
Authors: David Oyewumi Oyekunle, David Preston, Florence Ibeh
Abstract:
The paper provides an overview of the impact of the human dimension in project and team management on project outcomes, which increasingly affects the performance of organizations. Recognizing its crucial significance, the research focuses on analyzing the psychological and interpersonal dynamics within project teams. This research is highly significant in the dynamic field of project management, as it addresses important gaps and offers vital insights that align with the constantly changing demands of the profession. A case study was conducted at the University of Salford to examine how human activity affects project management and performance. The study employed a mixed methodology to gain a deeper understanding of the real-world experiences of the subjects and project teams. Data analysis procedures used to address the research objectives included a deductive approach, which involves testing a clear hypothesis or theory, as well as descriptive analysis and visualization. The survey comprised a sample of 40 participants out of 110 project management professionals, including staff and final-year students in the Salford Business School, selected using a purposeful sampling method. To mitigate bias, the study ensured diversity in the sample by including both staff and final-year students. A smaller sample size allowed for more in-depth analysis and a focused exploration of the research objective. Conflicts, for example, are intricate occurrences shaped by a multitude of psychological stimuli and social interactions, and they may either hinder or enhance project performance and project management productivity. The study identified conflict elements, including culture, environment, personality, attitude, individual project knowledge, team relationships, leadership, and team dynamics among team members, as crucial human factors to manage in order to minimize conflict.
The findings offer project professionals valuable insights that can help them create a collaborative and high-performing project environment. Uncovering human influence on project performance shows that effective communication, optimal team synergy, and a keen understanding of project scope are necessary for projects to attain exceptional performance and efficiency. To achieve the aims of this study, it was acknowledged that productive team dynamics and strong group cohesiveness are crucial for managing conflicts in a beneficial and forward-thinking manner. Addressing the identified human influences will contribute to a more sustainable project management approach and offers opportunities for further exploration and potential contributions to both academia and practical project management. Keywords: human dimension, project management, team dynamics, conflict resolution
Procedia PDF Downloads 108
697 A Collaborative Approach to Improving Mental and Physical Health-Related Outcomes for a Heart Transplant Patient Through Music and Art Therapy Treatment
Authors: Elizabeth Laguaite, Alexandria Purdy
Abstract:
Heart transplant recipients face psycho-physiological stressors, including pain, lengthy hospitalizations, delirium, and existential crises. These place them at increased risk of Post Traumatic Stress Disorder (PTSD), which is a predictor of poorer mental and physical Health-Related Quality of Life (HRQOL) outcomes and increased mortality. There is limited research on the prevention of Post Traumatic Stress Symptoms (PTSS) in transplant patients. This case report focuses on a collaborative music and art therapy intervention used to improve outcomes for the HMH transplant recipient John (an alias). John, a 58-year-old man with congestive heart failure, was admitted to HMH in February 2021 with cardiogenic shock and was cannulated with an intra-aortic balloon pump, an Impella 5.5, and Venoarterial Extracorporeal Membrane Oxygenation (VA-ECMO) as a bridge to heart and kidney transplant. He was listed as status 1 for transplant. Music Therapy and Art Therapy (MT and AT) were ordered by the physician for mood regulation, trauma processing, and anxiety management. During MT/AT sessions, John reported a history of anxiety and depression exacerbated by medical acuity, shortness of breath, and lengthy hospitalizations. He expressed difficulty sleeping, pain, and existential questions. Initially seen individually by MT and AT, it was determined he could benefit from a collaborative approach due to similar thematic content within sessions. A life review intervention was developed by MT/AT. Its purpose was for him to creatively express, reflect on, and process his medical narrative, including the identification of positive and negative events leading up to admission at HMH, the journey to transplant, and his hope for the future. Through this intervention, he created artworks that symbolized each event and paired them with songs, two of which were composed with the MT during treatment.
As of September 2023, John has not been readmitted to the hospital and expressed that this treatment is what “got him through transplant”. MT and AT can provide opportunities for a patient to reminisce through creative expression, leading to a shift in the personal meaning of these experiences, promoting resolution, and ameliorating associated trauma. The closer to the traumatic event it is processed, the less likely PTSD is to develop. This collaborative MT/AT approach could improve long-term outcomes by reducing mortality and readmission rates for transplant patients. Keywords: art therapy, music therapy, critical care, PTSD, trauma, transplant
Procedia PDF Downloads 81
696 Measuring Digital Literacy in the Chilean Workforce
Authors: Carolina Busco, Daniela Osses
Abstract:
The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. There are no methodological instruments available in Chile to measure the workforce's digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To achieve the research objective of developing a digital literacy model of indicators and a corresponding research instrument, along with an exploratory analysis of the data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a four-value Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 Likert-scale items. The KMO measure showed a value of 0.626, indicating a medium level of correlation; Bartlett's test yielded a significance value of less than 0.05; and Cronbach's alpha was 0.618.
Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%); ii) ability to solve everyday problems using digital tools (14.89%); iii) online tools used to stay connected with others (11.94%); and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlations are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately puts the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills is radically different when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others. Keywords: digital literacy, digital society, workforce digitalization, digital skills
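The internal-consistency figure quoted above (Cronbach's alpha of 0.618 on the 12 Likert items) follows the standard formula; a minimal sketch with invented illustrative data, not the study's responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# toy four-point Likert data: 6 respondents x 3 items (hypothetical)
scores = np.array([[4, 3, 4], [2, 2, 3], [3, 3, 3],
                   [1, 2, 1], [4, 4, 4], [2, 1, 2]])
print(round(cronbach_alpha(scores), 3))   # 0.933 for this toy matrix
```

Values around 0.6, as reported in the study, are usually read as questionable-to-acceptable internal consistency for an exploratory instrument.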
Procedia PDF Downloads 67
695 The Relationship between the Competence Perception of Student and Graduate Nurses and Their Autonomy and Critical Thinking Disposition
Authors: Zülfiye Bıkmaz, Aytolan Yıldırım
Abstract:
This study was planned as a descriptive, regression-based study in order to determine the relationship between the competency levels of working nurses, the levels of competency expected by nursing students, the critical thinking disposition of nurses, their perceived autonomy levels, and certain sociodemographic characteristics. It is also a methodological study with regard to the intercultural adaptation of the Nursing Competence Scale (NCS) in both working and student samples. The nurse sample consisted of 443 people who had worked at a university hospital for at least 6 months and filled out the questionnaires. The student group consisted of 543 third- and fourth-year nursing students from four public universities. Data collection tools consisted of a questionnaire prepared to capture the sociodemographic, economic, and personal characteristics of the participants, the 'Nursing Competency Scale', the 'Autonomy Subscale of the Sociotropy-Autonomy Scale', and the 'California Critical Thinking Disposition Inventory'. In data evaluation, descriptive statistics, nonparametric tests, Rasch analysis, and correlation and regression tests were used. The language validity of the NCS was established through translation and back-translation, and the content validity of the scale was established through expert views. The scale, in its final form, was piloted with a group consisting of graduate and student nurses. The temporal stability of the test was assessed using the test-retest method, and, to avoid the time-related problems of retesting, the split-half reliability method was also used. The Cronbach's alpha coefficient of the scale was found to be 0.980 for the nurse group and 0.986 for the student group.
Statistically significant relationships were found between competence and critical thinking and variables such as age, gender, marital status, family structure, having had critical thinking training, education level, class of the students, service worked in, employment style and position, and employment duration. Statistically significant relationships were also found between autonomy and certain variables of the student group, such as year, employment status, decision-making style regarding self, total duration of employment, employment style, and education status. As a result, it was determined that the interculturally adapted NCS is a valid and reliable measurement tool and was found to be associated with autonomy and critical thinking. Keywords: nurse, nursing student, competence, autonomy, critical thinking, Rasch analysis
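The split-half reliability check mentioned above, with the Spearman-Brown correction applied to the half-test correlation, can be sketched as follows; the item data here are invented for illustration:

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Correlate odd- and even-numbered item totals, then apply the
    Spearman-Brown correction: r_sb = 2r / (1 + r)."""
    odd = items[:, ::2].sum(axis=1)    # items 1, 3, 5, ...
    even = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# toy data: 6 respondents x 4 Likert items (hypothetical)
scores = np.array([[4, 4, 3, 4], [2, 3, 2, 2], [3, 3, 3, 4],
                   [1, 2, 1, 1], [4, 4, 4, 4], [2, 2, 1, 2]])
print(round(split_half_reliability(scores), 3))
```

The correction compensates for the fact that each half is only half as long as the full scale, so the uncorrected half-test correlation underestimates full-test reliability.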
Procedia PDF Downloads 397
694 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is evidently a huge gap between these tools' academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences, and product type.
Teams found the solution very useful and intuitive, and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a feedback loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools to those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants. Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 108
693 A Digital Health Approach: Using Electronic Health Records to Evaluate the Cost Benefit of Early Diagnosis of Alpha-1 Antitrypsin Deficiency in the UK
Authors: Sneha Shankar, Orlando Buendia, Will Evans
Abstract:
Alpha-1 antitrypsin deficiency (AATD) is a rare, genetic, and multisystemic condition. Underdiagnosis is common, leading to chronic pulmonary and hepatic complications, increased resource utilization, and additional costs to the healthcare system. Currently, there is limited evidence of the direct medical costs of AATD diagnosis in the UK. This study explores the economic impact of AATD patients during the 3 years before diagnosis and identifies the major cost drivers using primary and secondary care electronic health record (EHR) data. This 3-year pre-diagnosis window was chosen based on the ability of our tool to identify patients earlier. The AATD algorithm was created using published disease criteria and applied to the EHRs of 148 known AATD patients found in a primary care database of 936,148 patients (413,674 in Biobank and 501,188 in a single primary care locality). Among the 148 patients, 9 were flagged earlier by the tool, which could save on average 3 (range 1-6) years per patient. We analysed the primary care journey of 101 of the 148 AATD patients and the Hospital Episode Statistics (HES) data of 20 patients, all of whom had at least 3 years of clinical history in their records before diagnosis. The codes related to laboratory tests, clinical visits, referrals, hospitalization days, day cases, and inpatient admissions attributable to AATD were examined over this 3-year period before diagnosis. The average cost per patient was calculated, and the direct medical costs were modelled based on a mean prevalence of 100 AATD patients in a population of 500,000. A deterministic sensitivity analysis (DSA) of 20% was performed to determine the major cost drivers. Cost data were obtained from the NHS National Tariff 2020/21, the National Schedule of NHS Costs 2018/19, PSSRU 2018/19, and private care tariffs.
The total direct medical cost of one hundred AATD patients over the three years before diagnosis in primary and secondary care in the UK was £3,556,489, with an average direct cost per patient of £35,565. The vast majority of this total direct cost (95%) was associated with inpatient admissions (£3,378,229). The DSA determined that the costs associated with tier-2 laboratory tests and inpatient admissions were the greatest contributors to direct costs in primary and secondary care, respectively. This retrospective study shows the role of EHRs in calculating direct medical costs and the potential benefit of new technologies for the early identification of patients with AATD to reduce the economic burden on primary and secondary care in the UK. Keywords: alpha-1 antitrypsin deficiency, costs, digital health, early diagnosis
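The cost arithmetic and a one-way deterministic sensitivity analysis varying each component by ±20% can be sketched as below; the split of the non-inpatient remainder is an assumption derived from the reported totals, not a figure from the study:

```python
# Hypothetical one-way deterministic sensitivity analysis (±20%) over the
# cost components; the headline figures echo the abstract's reported totals.
components = {
    "inpatient_admissions": 3_378_229,    # ~95% of the total (reported)
    "other_primary_secondary": 178_260,   # remainder (assumed split)
}
total = sum(components.values())
per_patient = total / 100                 # modelled on 100 AATD patients
print(f"total £{total:,}, per patient £{per_patient:,.0f}")

# vary one component at a time by ±20%, holding the others fixed
for name, cost in components.items():
    low, high = total - 0.2 * cost, total + 0.2 * cost
    print(f"{name}: £{low:,.0f} - £{high:,.0f}")
```

The component whose ±20% swing produces the widest band around the total is the dominant cost driver, which is how a tornado-style DSA identifies inpatient admissions here.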
Procedia PDF Downloads 168
692 Analyzing Brand Related Information Disclosure and Brand Value: Further Empirical Evidence
Authors: Yves Alain Ach, Sandra Rmadi Said
Abstract:
An extensive review of the literature on brands has shown that little research has focused on the nature and determinants of the information disclosed by companies with respect to the brands they own and use. The objective of this paper is to address this issue. More specifically, the aim is to characterize the nature of the information disclosed by companies in terms of estimating the value of brands, and to identify the determinants of that information according to the company characteristics most frequently tested by previous studies on the disclosure of information on intangible capital, by studying the practices of a sample of 37 French companies. Our findings suggest that companies prefer to communicate accounting, economic, and strategic information in relation to their brands instead of providing financial information. The analysis of the determinants of the information disclosed on brands leads to the conclusion that groups which operate internationally and have chosen a category 1 auditing firm communicate more information to investors in their annual reports. Our study points out that the sector is not an explanatory variable for voluntary brand disclosure, unlike in previous studies on intangible capital. Our study is distinguished by examining an element that has received little attention in the financial literature, namely the determinants of brand-related information. With regard to the effect of size on brand-related information disclosure, our research does not confirm this link. Many authors point out that large companies tend to publish more voluntary information in order to respond to stakeholder pressure. Our study also establishes that the relationship between the supply of brand information and performance is insignificant.
This relationship was already controversial in previous research, which suggests that higher profitability motivates managers to provide more information, as this strengthens investor confidence and may increase managers' compensation. Our main contribution focuses on the nature of the inherent characteristics of the companies that disclose the most information about brands. Our results show the absence of a link between size and industry on the one hand and the supply of brand information on the other, contrary to previous research. Our analysis highlights three types of information disclosed about brands: accounting, economic, and strategic. We therefore question the reasons that may lead companies to voluntarily communicate mainly accounting, economic, and strategic information from one year to the next, and not to communicate detailed information that would allow the financial value of their brands to be reconstituted. Our results can be useful for companies and investors. They highlight, to our surprise, the lack of financial information that would allow investors to arrive at a better valuation of brands. We believe that additional information is needed to improve the quality of accounting and financial information related to brands. The additional information provided in the special report that we recommend could be called a 'report on intangible assets'. Keywords: brand related information, brand value, information disclosure, determinants
Procedia PDF Downloads 85
691 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai has traditionally been the epicenter of India's trade and commerce, and its existing major ports, Mumbai and Jawaharlal Nehru (JN) Ports, situated in the Thane estuary, are developing their waterfront facilities. Various developments in this region over the passage of decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water in view of the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the existence of shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is essential to have information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; instead, artificial intelligence was applied to predict water levels by training a network on the tide data measured over one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over a lunar tidal cycle (2013) were used to train, validate, and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data and the study reveal that: the measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide by about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and network performance improves as the number of neurons in the hidden layer increases; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, helping to plan the operation of pumping at Pir-Pau and improve the ship schedule at Ulwe. Keywords: artificial neural network, back-propagation, tide data, training algorithm
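A minimal stand-in for the approach described, using scikit-learn's MLPRegressor to learn the mapping from an upstream tide record to an amplified, phase-lagged downstream record (the L-BFGS solver here plays a role loosely analogous to LM as a second-order method; the constituent period, amplification, and lag are illustrative values, not the measured ones):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
t = np.arange(0, 15 * 24, 0.5)                  # 15 days at half-hour steps
# synthetic M2-like tide at the reference station ("Apollo Bunder")
apollo = 2.0 * np.sin(2 * np.pi * t / 12.42)
# downstream station ("Ulwe"): ~10% amplification, ~15 min phase lag, noise
ulwe = 2.2 * np.sin(2 * np.pi * (t - 0.25) / 12.42)
ulwe += rng.normal(0, 0.02, ulwe.shape)

# feed a short window of upstream levels as features; drop the first rows
# where np.roll wraps around
X = np.column_stack([apollo, np.roll(apollo, 1), np.roll(apollo, 2)])[3:]
y = ulwe[3:]

net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)
print(round(net.score(X, y), 3))   # R^2 close to 1 on this synthetic series
```

Because the lagged, amplified signal is nearly a linear combination of the delayed inputs, even a small hidden layer reproduces it closely, which is consistent with the high correlation coefficients the study reports.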
Procedia PDF Downloads 485
690 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals
Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly
Abstract:
Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last 3 decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary to understand why we follow either of these protocols: NWTS, which follows the upfront surgery principle, or SIOP, which follows the upfront chemotherapy principle, in all stages of the disease. Objective: The aim is to assess the outcome in patients treated with preoperative chemotherapy and patients treated with upfront surgery, and to compare their effect on overall survival. Study design: To decide which protocol to follow, a retrospective survey was carried out on the records of patients aged 1 day to 18 years suffering from Wilms tumor who were admitted to the Alexandria University Hospital pediatric oncology, pediatric urology, and pediatric surgery departments from 2010 to 2015. The data transfer sheet was designed and edited following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow. Data were fed to the computer and analyzed using the IBM SPSS software package version 20.0 (11). Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Comparisons between groups regarding categorical variables were tested using the chi-square test. When more than 20% of the cells had an expected count of less than 5, correction for chi-square was conducted using Fisher's exact test or the Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk, and D'Agostino tests; if these revealed a normal data distribution, parametric tests were applied.
If the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for abnormally distributed data, the Mann-Whitney test was used. Significance of the obtained results was judged at the 5% level. Results: A statistically significant difference in survival was observed between the two studied groups, favoring upfront chemotherapy (86.4%) as compared to the upfront surgery group (59.3%), where P = 0.009. As regards complications, 20 cases (74.1%) out of 27 were complicated in the group of patients treated with upfront surgery, while 30 cases (68.2%) out of 44 had complications in patients treated with upfront chemotherapy. Also, the incidence of intraoperative complication (rupture) was lower in the upfront chemotherapy group than in the upfront surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery, as patients who started with upfront chemotherapy showed a higher survival rate, a lower complication rate, less need for radiotherapy, and a lower recurrence rate. Keywords: Wilms tumor, renal tumor, chemotherapy, surgery
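The reported survival comparison can be checked with a chi-square test; the cell counts below are reconstructed from the quoted percentages (38/44 ≈ 86.4% surviving with upfront chemotherapy, 16/27 ≈ 59.3% with upfront surgery) and are therefore an assumption, not figures taken directly from the paper:

```python
from scipy.stats import chi2_contingency

# rows: upfront chemotherapy, upfront surgery; columns: survived, died
# (counts inferred from the reported percentages and group sizes)
table = [[38, 6],
         [16, 11]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 3))   # ~0.009, in line with the reported P = 0.009
```

Without the Yates continuity correction the p-value lands at roughly 0.009, matching the abstract; all expected counts exceed 5, so the uncorrected chi-square is a reasonable choice here.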
Procedia PDF Downloads 318
689 Effects of Conversion of Indigenous Forest to Plantation Forest on the Diversity of Macro-Fungi in Kereita Forest, Kikuyu Escarpment, Kenya
Authors: Susan Mwai, Mary Muchane, Peter Wachira, Sheila Okoth, Muchai Muchane, Halima Saado
Abstract:
Tropical forests harbor a wide range of biodiversity and rich macro-fungi diversity compared to temperate regions of the world. However, this biodiversity faces the threat of extinction, given the rate of forest loss taking place before proper study and documentation of macro-fungi is achieved. The present study was undertaken to determine the effect of converting indigenous habitat to plantation forest on macro-fungi diversity. To achieve this objective, an inventory focusing on macro-fungi diversity was conducted within the Kereita block of the Kikuyu Escarpment forest, which lies on the southern side of the Aberdare mountain range. The inventory covered the indigenous forest and a more than 15-year-old Patula pine plantation forest, during the wet (long rain season, December 2014) and dry (short rain season, May 2015) seasons. In each forest type, 15 permanent (20 m x 20 m) sampling plots distributed across three (3) forest blocks were used. Both field and laboratory methods involved recording the abundance of fruiting bodies, establishing the taxonomic identity of species, and analyzing diversity indices and measures in terms of species richness, density, and diversity. The R statistical program was used to analyze species diversity, and Canoco 4.5 software was used for species composition. A total of 76 genera in 28 families and 224 species were encountered in both forest types. The most represented taxa belonged to the Agaricaceae (16%), Polyporaceae (12%), and Marasmiaceae and Mycenaceae (7% each) families. Most of the recorded macro-fungi were saprophytic, mostly colonizing litter-based (38%) and wood-based (34%) substrates, followed by soil organic matter dwelling species (17%). Ectomycorrhizal fungi (5%) and parasitic fungi (2%) were the least encountered.
The data established that indigenous forests (native ecosystems) host a richer macro-fungi assemblage in terms of density (2.6 individual fruit bodies/m²), species richness (8.3 species/plot), and species diversity (1.49/plot) compared to the plantation forest. The conversion of native forest to plantation forest also changed species composition, though it did not alter species diversity. Seasonality was also shown to significantly affect the diversity of macro-fungi, with 61% of the total species being present during the wet season. Based on the present findings, forested ecosystems in Kenya hold a diverse macro-fungi community, which warrants conservation measures. Keywords: diversity, indigenous forest, macro-fungi, plantation forest, season
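Per-plot diversity values like the 1.49 quoted above are typically Shannon-type indices computed from species abundances; a minimal sketch of the computation (the species counts are hypothetical, not the study's data):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# hypothetical fruit-body counts for five species in one sampling plot
print(round(shannon_index([12, 8, 5, 3, 2]), 2))   # 1.43 for this toy plot
```

The index rises with both the number of species and the evenness of their abundances, which is why a mixed indigenous stand typically scores higher than a single-species plantation.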
Procedia PDF Downloads 214
688 Cai Guo-Qiang: A Chinese Artist at the Cutting-Edge of Global Art
Authors: Marta Blavia
Abstract:
Magiciens de la terre, organized in 1989 by the Centre Pompidou, became 'the first worldwide exhibition of contemporary art' by presenting artists from Western and non-Western countries, including three Chinese artists. For the first time, the West turned its eyes to other countries not as exotic sources of inspiration, but as places where contemporary art was also being created. One year later, Chine: demain pour hier was inaugurated as the first Chinese avant-garde group exhibition in the West. Among the artists included was Cai Guo-Qiang who, like many other Chinese artists, had left his home country in the eighties in pursuit of greater creative freedom. By exploring non-Western artistic perspectives, both landmark exhibitions questioned the predominance of the Eurocentric vision in the construction of art history. But more than anything else, these exhibitions laid the groundwork for the rise of the phenomenon known as 'global contemporary art'. At the same time, 1989 was also a turning point in Chinese art history. Because of the Tiananmen student protests, the Chinese government undertook a series of measures to suppress any kind of avant-garde artistic activity after a decade of relative openness. During the eighties, and especially after the Tiananmen crackdown, some important artists began to leave China and move overseas, such as Xu Bing and Ai Weiwei (USA); Chen Zhen and Huang Yong Ping (France); or Cai Guo-Qiang (Japan). After emigrating, Chinese overseas artists began to develop projects in accordance with their new environments and audiences, as well as to appear in numerous international exhibitions. With their creations, which moved freely between a variety of Eastern and Western art sources, these artists were crucial agents in the emergence of global contemporary art. 
Like other overseas Chinese artists, Cai Guo-Qiang saw his career take off during the 1990s and early 2000s, right at the moment when the Western art world started to look beyond itself. Little by little, he developed a very personal artistic language that redefines Chinese ideas, symbols, and traditional materials in a new world order marked by globalization. Cai Guo-Qiang participated in many of the exhibitions that contributed to shaping global contemporary art: Encountering the Others (1992); the 45th Venice Biennale (1993); Inside Out: New Chinese Art (1997); and the 48th Venice Biennale (1999), where he recreated the monumental Chinese social realist work Rent Collection Courtyard, which earned him the Golden Lion Award. By examining the different stages of Cai Guo-Qiang's artistic path as well as the transnational dimensions of his creations, this paper aims to offer a comprehensive survey of the construction of the discourse of global contemporary art.
Keywords: Cai Guo-Qiang, Chinese artists overseas, emergence global art, transnational art
Procedia PDF Downloads 285
687 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
Stream metabolism is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water over one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR), and Canchim (CA). Every two months, water temperature, pH, and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are also estimated in situ with a spherical densiometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out, in October and December 2015 and February 2016 (the next will be in April, June, and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8 C (Oct) and 20.7 +/- 0.5 C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5 uS/cm; BR: 5.5 +/- 0.7 uS/cm; CA: 33 +/- 1.4 uS/cm). The mean pH values were 5.0 (BR), 5.7 (ES), and 6.4 (CA). 
The mean concentrations of total phosphorus were 8.0 ug/L (BR), 66.6 ug/L (ES), and 51.5 ug/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 ug/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 +/- 6 L/s (ES), 11.4 +/- 3 L/s (BR), and 2.4 +/- 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR), and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded the gross primary production, with negative net primary production). The GPP varied from 0-0.4 g/m2.d (in Oct and Dec), and the R varied from 0.9-22.7 g/m2.d (Oct) and from 0.9-7 g/m2.d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of the ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism of pristine streams can help define natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
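The one-station method models the diel DO curve as a balance of light-driven photosynthesis, respiration, and reaeration toward saturation. A minimal numerical sketch follows; all parameter values are hypothetical, rates are volumetric (g O2/m3/day) whereas the abstract reports areal rates, so in practice a depth conversion would be needed, and the light curve is an idealized half-sine rather than the logged PAR data:

```python
import math

def do_rate(c, par_frac, gpp, er, k2, c_sat):
    # One-station DO balance (rates per day): photosynthesis scaled by light,
    # constant ecosystem respiration, reaeration toward saturation
    return gpp * par_frac - er + k2 * (c_sat - c)

def simulate_diel_do(c0=7.0, gpp=0.4, er=2.0, k2=10.0, c_sat=8.0, steps=144):
    """Forward-Euler simulation of 24 h of DO at 10-minute steps (dt = 1/144 day)."""
    dt = 1.0 / steps
    c = c0
    series = [c]
    for i in range(steps):
        t = i * dt  # fraction of the day
        # idealized half-sine light curve between 06:00 and 18:00, zero at night
        par = math.sin(math.pi * (t - 0.25) / 0.5) if 0.25 <= t <= 0.75 else 0.0
        c += do_rate(c, par, gpp, er, k2, c_sat) * dt
        series.append(c)
    return series
```

With ER well above GPP, as measured in these streams, the simulated DO stays below saturation throughout the day; in the actual analysis the parameters are instead fitted so the modeled curve matches the logged diel DO data.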
Procedia PDF Downloads 259
686 Forced Immigration to Turkey: The Socio-Spatial Impacts of Syrian Immigrants on Turkish Cities
Authors: Tolga Levent
Abstract:
Throughout the past few decades, forced immigration has been a significant problem for many developing countries. Turkey is one of those countries, having experienced many waves of forced immigration in the Republican era. However, the ongoing wave of forced immigration of Syrians, which started with the Syrian Civil War in 2011, is strikingly influential due to its intensity. In six years, approximately 3.4 million Syrians have entered Turkey and show high levels of spatial concentration in certain cities close to the Syrian border. These concentrations make Syrians and their problems relatively visible, especially in those cities. The problems of Syrians in Turkish cities could be associated with all dimensions of daily life. Within the economic dimension, high rates of Syrian unemployment push them into informal jobs offering very low wages. The financial aid they continuously demand from public authorities triggers anti-Syrian behavior in local communities. Moreover, their relatively limited social adaptation capacities increase integration problems within the social dimension day by day. There are even problems related to the public health dimension, such as the reappearance of certain childhood illnesses due to insufficient vaccination of Syrian children. These problems are significant but relatively easy to prevent by using different types of management strategies and structural policies. However, there are other types of problems, namely urban problems, emerging with the socio-spatial impacts of Syrians on Turkish cities in a very short period of time. There are relatively few studies about these impacts, since they are difficult to comprehend. The aim of the study, in this respect, is to understand these rapidly emerging impacts and urban problems resulting from this massive immigration influx and to discuss new qualities of urban planning for facing them. In the first part, there is a brief historical consideration of forced immigration waves in Turkey. 
These waves are important for making comparisons with the ongoing immigration wave and understanding its significance. The second part presents quantitative and qualitative analyses of the spatial existence of Syrian immigrants in the city of Mersin, an example of the cities where Syrians are highly concentrated. Using official data from public authorities, quantitative statistical analyses are made to detect spatial concentrations of Syrians at the neighborhood level. As methods of qualitative research, observations and in-depth interviews are used to define the socio-spatial impacts of Syrians. The main results show that 'cities in cities' emerge through sharp socio-spatial segregation, which changes density surfaces, produces unforeseen land-use patterns, results in inadequacies of public services, and creates degradation/deterioration of the urban environments occupied by Syrians. All these problems are significant; however, the Turkish planning system does not have the capacity to cope with them. In the final part, there is a discussion about new qualities of urban planning for facing these impacts and urban problems. The main point of discussion is the possibility of resilient urban planning under the conditions of uncertainty and unpredictability fostered by the immigration crisis. Such a resilient planning approach might provide an option for countries aiming to cope with the negative socio-spatial impacts of massive immigration influxes.
Keywords: cities, forced immigration, Syrians, urban planning
Procedia PDF Downloads 257
685 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process
Authors: Venkata Ramanaiah, Shabina Khanam
Abstract:
Enormous potential for saving energy is available in coal-based sponge iron plants, as these are associated with a high percentage of energy wastage per unit of sponge iron production. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant with a production capacity of 100 tonnes per day, operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. The plant consists of the rotary kiln, rotary cooler, dust settling chamber, after-burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scrapper, and chimney as important equipment. Principles of process integration are used in the proposed option. It accounts for preheating kiln inlet streams such as kiln feed and slinger coal up to 170°C using the waste gas exiting the ESP. Further, the kiln outlet stream is cooled from 1020°C to 110°C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on hot utility, heats of reaction, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as utility and as process stream, an iterative approach is used in the solution methodology to compute coal consumption. Further, the water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed. Along with these, operational aspects of the proposed design are also discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, with one FD fan each, are installed additionally. Thus, the proposed option requires a total capital investment of $0.84 million. 
Preheating the kiln feed, slinger coal, and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, a 96% reduction in water consumption is also observed, which is an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year, with a payback period of only 4.97 months. The energy efficient factor (EEF), which is the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
Keywords: coal consumption, energy conservation, process integration, sponge iron plant
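The reported payback period is consistent with a simple (undiscounted) payback calculation from the figures above; the exact accounting used in the paper may differ slightly, but a sketch is:

```python
def simple_payback_months(capital_investment, annual_profit):
    # Simple (undiscounted) payback period in months:
    # time for cumulative profit to repay the capital outlay
    return 12.0 * capital_investment / annual_profit

# Figures from the abstract: $0.84 million investment, $2.06 million/year profit
months = simple_payback_months(0.84e6, 2.06e6)
print(f"{months:.1f} months")  # ~4.9 months, close to the reported 4.97
```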
Procedia PDF Downloads 144
684 The Debureaucratization Strategy for the Portuguese Health Service through Effective Communication
Authors: Fernando Araujo, Sandra Cardoso, Fátima Fonseca, Sandra Cavaca
Abstract:
A debureaucratization strategy for the Portuguese Health Service was adopted by the Executive Board of the SNS, in close articulation with the Shared Services of the Ministry of Health. Two of the main dimensions focused on sick leaves (SL), which turn primary health care (PHC) units into administrative institutions and limit access for patients. The self-declaration of illness (SDI) project, through the National Health Service Contact Centre (SNS24), began on May 1, 2023, and has already resulted in the issuance of more than 300,000 SDIs without the need to allocate resources from the National Health Service (NHS). This political decision allows each citizen, a maximum of two times per year and three days each time, to report their health condition, if ill and on their own responsibility, in a dematerialized way, thereby justifying their absence from work, although under Portuguese law no salary is paid for these first three days. Using a digital approach, this is now feasible without the need to go to the PHC and take up its time only to obtain an SL. Through this measure, bureaucracy has been reduced, and the system has been refocused on users, improving the lives of citizens and reducing the administrative burden on PHC, which now has more consultation time for the users who need it. The second initiative, which began on March 1, 2024, allows SLs to be issued in the emergency departments (ED) of public hospitals and in health institutions of the social and private sectors. This project is intended to ensure that a user who has suffered an acute urgent illness and has been observed in the ED of a public hospital or in a private or social entity no longer needs to go to PHC only to apply for the respective SL. Since March 1, 54,453 SLs have been issued: 242 in private or social sector institutions, 6,918 in public hospitals (of which 134 were in the ED), and 47,292 in PHC. 
This approach has proven technically robust, allows immediate resolution of problems, and differentiates the work of doctors. However, it is important to continue to safeguard the proper functioning of the ED, preventing non-urgent users from going there only to obtain an SL. Thus, in order to make better use of existing resources, this extension was operationalized in a balanced way, allowing SLs to be issued in hospital EDs only to critically ill patients or patients referred by INEM, SNS24, or PHC. In both cases, an intense public campaign was implemented to explain how the measures work and their benefits for patients. In satisfaction surveys, more than 95% of patients and doctors were satisfied with the solutions, asking for extensions to other areas. The administrative simplification agenda of the NHS continues its effective development. For the success of this debureaucratization agenda, the key factors are effective communication and the ability to reach patients and health professionals in order to increase health literacy and the correct use of the NHS.
Keywords: debureaucratization strategy, self-declaration of illness, sick leaves, SNS24
Procedia PDF Downloads 73
683 Assessing the Spatial Distribution of Urban Parks Using Remote Sensing and Geographic Information Systems Techniques
Authors: Hira Jabbar, Tanzeel-Ur Rehman
Abstract:
Urban parks and open spaces play a significant role in improving the physical and mental health of citizens, strengthening societies, and making cities more attractive places to live and work. As the world's cities continue to grow, continuing to value green space in cities is vital but also a challenge, particularly in developing countries where there is pressure on space, resources, and development. Offering equal opportunity of access to parks is one of the important issues of park distribution. The distribution of parks should allow all inhabitants close proximity to their residence. Remote sensing (RS) and geographic information systems (GIS) can provide decision makers with enormous opportunities to improve the planning and management of park facilities. This study exhibits the capability of GIS and RS techniques to provide baseline knowledge about the distribution of parks and their level of accessibility, and to help identify potential areas for such facilities. For this purpose, Landsat OLI imagery for the year 2016 was acquired from the USGS Earth Explorer. Preprocessing models were applied using Erdas Imagine 2014v for the atmospheric correction, and an NDVI model was developed and applied to quantify the land use/land cover classes, including built-up, barren land, water, and vegetation. The parks amongst the total public green spaces were selected based on their signature in the remote sensing image and their distribution. Percentages of total green space and park green space were calculated for each town of Lahore City, and the results were then compared against the recommended standards. The ANGSt model was applied to calculate accessibility from parks. Service area analysis was performed using the Network Analyst tool. The serviceability of these parks was evaluated by employing statistical indices such as service area, service population, and park area per capita. 
Findings of the study may help town planners understand the distribution of parks, the demand for new parks, and the potential areas which are deprived of parks. The purpose of the present study is to provide necessary information to planners, policy makers, and scientific researchers in the process of decision making for the management and improvement of urban parks.
Keywords: accessible natural green space standards (ANGSt), geographic information systems (GIS), remote sensing (RS), United States geological survey (USGS)
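The NDVI used above to separate vegetation from built-up, barren, and water classes is computed per pixel from the red and near-infrared bands (for Landsat 8 OLI, bands 4 and 5). A minimal sketch; the reflectance values and the 0.2 vegetation threshold are illustrative rules of thumb, not taken from the study:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids division by zero
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)

# Hypothetical 2x2 patch of surface reflectance values
nir = np.array([[0.50, 0.40], [0.30, 0.10]])
red = np.array([[0.10, 0.10], [0.20, 0.10]])
v = ndvi(nir, red)
vegetation_mask = v > 0.2  # common rule-of-thumb threshold for green vegetation
```

Pixels above the threshold would feed the green-space percentages per town; the park subset is then delineated from that mask and ancillary data.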
Procedia PDF Downloads 342
682 The Risk of Bleeding in Knee or Shoulder Injections in Patients on Warfarin Treatment
Authors: Muhammad Yasir Tarar
Abstract:
Background: Intraarticular steroid injections are an effective option for alleviating the symptoms of conditions like osteoarthritis, rheumatoid arthritis, crystal arthropathy, and rotator cuff tendinopathy. Most of these injections are performed in the elderly, who are often on polypharmacy, at times including anticoagulants. Up to 6% of patients aged 80-84 years have been reported to be taking warfarin. The literature on the safety of intraarticular injections in patients on warfarin is scarce, and it has remained debatable over the years which approach is safe for these patients: continuing warfarin carries a theoretical bleeding risk, while stopping it can lead to severe, life-threatening thromboembolic events in high-risk patients. Objectives: To evaluate the risk of bleeding complications in patients on warfarin undergoing intraarticular injections or arthrocentesis. Study Design & Methods: A literature search of the MEDLINE (1946 to present), EMBASE (1974 to present), and Cochrane CENTRAL (1988 to present) databases was conducted in November 2020 using any combination of the keywords injection, knee, shoulder, joint, intraarticular, arthrocentesis, warfarin, and anticoagulation, for articles published in any language with no publication year limit. The inclusion criteria required reporting of the rate of bleeding complications following injection of the knee or shoulder in patients on warfarin treatment. Randomized controlled trials and prospective and retrospective study designs were included. A standardized electronic proforma was made for data extraction. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used. The Cochrane Risk of Bias Tool was used to assess the risk of bias in the included RCTs, and the Methodological Index for Non-Randomized Studies (MINORS) tool for the assessment of bias in observational studies. 
Results: The search of the databases returned a total of 852 articles. Relevant articles were shortlisted as per the inclusion criteria, and 7 articles were deemed suitable for inclusion. The total sample comprised 1,033 joints, of which 820 were specified knee and shoulder joints. Only 6 joints had bleeding complications: 5 of early bleeding at the time of injection or aspiration, and one late bleeding complication with an INR of 5. Additionally, 2 patients complained of bruising, 3 of pain, and 1 was managed for infection. Conclusions: The results of the meta-analysis show that it is relatively safe to perform intraarticular injections in patients on warfarin, regardless of the INR range.
Keywords: arthrocentesis, warfarin, bleeding, injection
Procedia PDF Downloads 77
681 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery
Abstract:
Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular acceleration experienced at the head and represent an increasing societal burden. Because the brain is still developing in youth, there is a great risk of long-term neuropsychological deficits following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the five most common youth sports in the United States. These include basketball, track and field, soccer, baseball (boys) or softball (girls), and football (boys) or volleyball (girls). Methods: A PubMed search was performed for four search themes combined. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.). The second theme identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.). The third theme identified the population (adolescents, children, youth, boys, girls). The last theme identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: prospective studies presenting original data on the incidence of concussion in a relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AEs), and incidence rate per 1,000 athletic exposures (IR/1000). Two one-way ANOVAs and a Tukey's post hoc test were conducted using SPSS. Results: From the 15 selected studies, statistical analysis revealed that the incidence of concussion per 1,000 AEs across the considered sports ranged from 0.014 (girls' track and field) to 0.780 (boys' football). 
The average IR/1000 across all sports was 0.483 for boys and 0.268 for girls; this difference in IR was found to be statistically significant (p=0.013). Tukey's post hoc test showed that football had a significantly higher IR/1000 than boys' basketball (p=0.022), soccer (p=0.033), and track and field (p=0.026). No statistical difference was found in concussion incidence between girls' sports. Removing football lowered the IR/1000 for boys to a level not statistically different (p=0.101) from that of girls. Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports (within gender). Males were overall more likely to be concussed than females when football was included (1.8x), whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising because of the nature and rules of the sport, it is concerning that research has shown a higher incidence of concussion in practices than in games. Interestingly, the findings indicate that girls' sports are more concussive overall when football is removed. This appears to counter the common notion that boys' sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.Keywords: gender, football, soccer, traumatic brain injury
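The incidence rates above are simple exposure-adjusted rates. A sketch of the calculation; the exposure counts below are hypothetical, chosen only to reproduce the order of magnitude reported for boys' football:

```python
def incidence_rate_per_1000_ae(concussions, athlete_exposures):
    # Incidence rate per 1,000 athletic exposures (AEs):
    # one AE = one athlete participating in one practice or game
    return 1000.0 * concussions / athlete_exposures

# Hypothetical season totals: 40 concussions over 51,300 AEs
ir = incidence_rate_per_1000_ae(40, 51_300)  # ~0.78, the order reported for boys' football

# Ratio of the average boys' and girls' rates reported in the abstract
ratio = 0.483 / 0.268  # ~1.8x, matching the stated gender difference
```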
Procedia PDF Downloads 143
680 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from the continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution. 
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves can get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is the hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
Procedia PDF Downloads 130