Search results for: principal component
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3088

568 The Power of Inferences and Assumptions: Using a Humanities Education Approach to Help Students Learn to Think Critically

Authors: Randall E. Osborne

Abstract:

A four-step 'humanities' thought model has been used in an interdisciplinary course for almost two decades and has been shown to aid students in becoming more inclusive in their world view. Lack of tolerance for ambiguity can interfere with this progression, so we developed an assignment that seems to have assisted students in developing more tolerance for ambiguity and, therefore, opened them up to make more progress on the critical thought model. The four-step critical thought model (built from a humanities education approach) is used in an interdisciplinary course on prejudice, discrimination, and hate in an effort to minimize egocentrism and promote sociocentrism in college students. A fundamental barrier to this progression is a lack of tolerance for ambiguity. The approach to the course is built on the assumption that Tolerance for Ambiguity (characterized by a dislike of uncertain or ambiguous situations, or of situations in which expected behaviors are uncertain) will likely serve as a barrier (if tolerance is low) or facilitator (if tolerance is high) of active 'engagement' with assignments. Given that active engagement with course assignments is necessary to promote an increase in critical thought and multicultural attitude change, low tolerance for ambiguity inhibits critical thinking and, ultimately, multicultural attitude change. As expected, those students showing the least decrease (or even an increase) in intolerance across the semester earned lower grades in the course than those students who showed a significant decrease in intolerance, t(1,19) = 4.659, p < .001. Students who demonstrated the most change in their Tolerance for Ambiguity (showed an increasing ability to tolerate ambiguity) earned the highest grades in the course. This is especially significant because faculty did not know student scores on this measure until after all assignments had been graded and course grades assigned. An assignment designed to make students' assumption and inference processes visible, so that they could be explored, was implemented with the goal that this exploration would promote more tolerance for ambiguity, which, as already outlined, promotes critical thought. The assignment offers students two options and then requires them to explore what they have learned about inferences and/or assumptions. This presentation outlines the assignment and demonstrates the humanities model, what students learn from particular assignments, and how it fosters a change in Tolerance for Ambiguity, which serves as the foundational component of critical thinking.

Keywords: critical thinking, humanities education, sociocentrism, tolerance for ambiguity

Procedia PDF Downloads 270
567 Exploring Empathy Through Patients’ Eyes: A Thematic Narrative Analysis of Patient Narratives in the UK

Authors: Qudsiya Baig

Abstract:

Empathy yields an unparalleled therapeutic value within patient-physician interactions. Medical research is inundated with evidence to support that a physician's ability to empathise with patients leads to a greater willingness to report symptoms, an improvement in diagnostic accuracy and safety, and better adherence and satisfaction with treatment plans. Furthermore, the Institute of Medicine states that empathy leads to more patient-centred care, which is one of the six main goals of a 21st century health system. However, there is a paradox between the theoretical significance of empathy and its presence, or lack thereof, in clinical practice. Recent studies have reported that empathy declines amongst students and physicians over time. The three most impactful contributors to this decline are: (1) disagreements over the definitions of empathy, making it difficult to implement it into practice, (2) poor consideration or regulation of empathy, leading to burnout and thus abandonment altogether, and (3) the lack of diversity in the curriculum and the influence of medical culture, which prioritises science over patient experience, limiting some physicians from using 'too much' empathy for fear of losing clinical objectivity. These issues were investigated by conducting a fully inductive thematic narrative analysis of patient narratives in the UK to evaluate the behaviours and attitudes that patients associate with empathy. The principal enquiries underpinning this study included uncovering the factors that affected the experience of empathy within provider-patient interactions and analysing their effects on patient care. This research contributes uniquely to this discourse by examining the phenomenon of empathy directly from patients' experiences, which were systematically extracted from a repository of online patient narratives of care titled 'CareOpinion UK'. Narrative analysis was specifically chosen as the methodology to examine narratives from a phenomenological lens, focusing on the particularity and context of each story. By enquiring beyond the superficial who-what-where, the study of narratives ascribed meaning to illness by highlighting the everyday reality of patients who face the exigent life circumstances created by suffering, disability, and the threat to life. The following six themes were found to be the most impactful in influencing the experience of empathy: dismissive behaviours, judgmental attitudes, undermining of patients' pain or concerns, holistic care, and failures and successes of communication or language. For each theme, there were overarching themes relating to either a failure to understand the patient's perspective or a success in taking a person-centred approach. An in-depth analysis revealed that a lack of empathy was greatly associated with an emotive-cognitive imbalance, which disengaged physicians from their patients' emotions. This study hereby concludes that competent providers require a combination of knowledge, skills, and, more importantly, empathic attitudes to help create a context for effective care. The crucial elements of that context involve (a) identifying empathy clues within interactions to engage with patients' situations, (b) attributing a perspective to the patient through perspective-taking, and (c) adapting behaviour and communication according to patients' individual needs. Empathy underpins that context, as does an appreciation of narrative, and the two are interrelated.

Keywords: empathy, narratives, person-centred, perspective, perspective-taking

Procedia PDF Downloads 131
566 The Effectiveness and the Factors Affecting Farmers' Adoption of Technological Innovation of Citrus Gerga Lebong in Bengkulu, Indonesia

Authors: Umi Pudji Astuti, Dedi Sugandi

Abstract:

The effectiveness of agricultural extension is determined by the components of the agricultural extension system, among which are the extension methods used. Effective methods should be selected and defined based on the characteristics of the target, the resources, the materials, and the objectives to be achieved. Citrus agribusiness development in Lebong is certainly supported by the role of stakeholders and citrus farmers, as well as by proper dissemination methods. Adoption in the extension process can essentially be interpreted as a process of behavioral change, in knowledge (cognitive), attitudes (affective), and skills (psycho-motoric), in a person after receiving an 'innovation' conveyed by extension to the target community. Knowledge and perception are needed as a first step in adopting an innovation, especially in the development of citrus agribusiness in Lebong. The process of adopting a specific technology is influenced by internal factors and farmers' perceptions of the technological innovation. Internal factors include formal education, farming experience, land ownership, and farm production. The objectives of this study were: 1) to analyze the effectiveness of field trial methods in improving farmers' cognitive and affective domains; 2) to determine the relationship between adoption level and farmers' knowledge; and 3) to analyze the factors that influence farmers' adoption of citrus technology innovation. The study was conducted through a survey of 40 respondents in Rimbo Pengadang Sub District, Lebong District, in 2014. Data were analyzed descriptively and with parametric statistics (multiple linear regression). The results showed that: 1) the field trial method was effective in improving farmer knowledge (23.17%) and positively affected farmer attitudes; 2) farmers' adoption level and their knowledge of the PTKJS innovation were positively and very closely related; and 3) the factors that influence the level of farmers' adoption are internal factors (education, knowledge, and the intensity of training) and external factors (distance from the house to the garden and from the house to the shop selling production facilities).
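For illustration, the kind of multiple linear regression described above can be sketched as follows; the variable names and the synthetic survey data are assumptions made for the sake of example and are not the study's 40-respondent dataset.

```python
# A minimal sketch of the descriptive and multiple linear regression analysis
# described above, relating adoption level to internal and external factors.
# Column names and the synthetic data are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 40                                            # respondents
survey = pd.DataFrame({
    "education_years": rng.integers(6, 16, n),
    "knowledge_score": rng.uniform(0, 100, n),
    "training_intensity": rng.integers(0, 10, n),
    "distance_to_garden_km": rng.uniform(0.1, 5.0, n),
    "distance_to_input_shop_km": rng.uniform(0.5, 20.0, n),
})
# synthetic adoption level (0-100) loosely driven by the internal factors
adoption = (0.4 * survey["knowledge_score"] + 2.0 * survey["training_intensity"]
            + rng.normal(0, 5, n))

X = sm.add_constant(survey)                       # multiple linear regression
model = sm.OLS(adoption, X).fit()
print(model.summary())                            # descriptive + coefficient output
```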

Keywords: affect, adoption technology, citrus gerga, effectiveness dissemination

Procedia PDF Downloads 184
565 Effect of Curing Temperature on the Textural and Rheological Properties of Gelatine-SDS Hydrogels

Authors: Virginia Martin Torrejon, Binjie Wu

Abstract:

Gelatine is a protein biopolymer obtained from the partial hydrolysis of animal tissues that contain collagen, the primary structural component in connective tissue. Gelatine hydrogels have attracted considerable research in recent years as an alternative to synthetic materials due to their outstanding gelling properties, biocompatibility and compostability. Surfactants, such as sodium dodecyl sulfate (SDS), are often used in hydrogel solutions as surface modifiers or solubility enhancers, and their incorporation can influence the hydrogel's viscoelastic properties and, in turn, its processing and applications. The literature usually focuses on the impact of formulation parameters (e.g., gelatine content, gelatine strength, additives incorporation) on gelatine hydrogel properties, but processing parameters, such as curing temperature, are commonly overlooked. For example, some authors have reported a decrease in gel strength at lower curing temperatures, but there is a lack of systematic viscoelastic characterisation of high-strength gelatine and gelatine-SDS systems over a wide range of curing temperatures. This knowledge is essential to meet and adjust the technological requirements of different applications (e.g., viscosity, setting time, gel strength or melting/gelling temperature). This work investigated the effect of curing temperature (10, 15, 20, 23, 25 and 30°C) on the elastic modulus (G') and melting temperature of high-strength gelatine-SDS hydrogels, at 10 wt% and 20 wt% gelatine contents, by small-amplitude oscillatory shear rheology coupled with Fourier Transform Infrared Spectroscopy. It also correlates the gel strength obtained by rheological measurements with the gel strength measured by texture analysis. The rheological behaviour of gelatine and gelatine-SDS hydrogels strongly depended on the curing temperature, and their gel strength and melting temperature can be modified slightly to adjust them to given processing and application needs. Lower curing temperatures led to gelatine and gelatine-SDS hydrogels with considerably higher storage modulus; however, their melting temperature was lower than that of gels cured at higher temperatures, which had lower gel strength. This effect was more considerable at longer timescales. This behaviour is attributed to the development of thermally resistant structures in the lower-strength gels cured at higher temperatures.

Keywords: gelatine gelation kinetics, gelatine-SDS interactions, gelatine-surfactant hydrogels, melting and gelling temperature of gelatine gels, rheology of gelatine hydrogels

Procedia PDF Downloads 95
564 Art History as Inspiration for Chefs: Autoethnographic Research on Art History Education in a Restaurant

Authors: Marta Merkl

Abstract:

The ongoing project that this paper presents concerns how the author introduces chefs to the history of art through a selected piece of art. The author is originally an art historian, but since 2019 she has been working on her PhD research topic related to designing dining experiences in the restaurant context, including the role of sensory experiences and storytelling. Thanks to a scholarship, she can participate in the re-design of a fine dining restaurant called Onyx in Budapest, which was awarded two Michelin stars before the pandemic caused by COVID-19. The management of the restaurant wants to broaden the chefs' horizons and develop their creativity by introducing them to each chapter of the visual arts. There is a kind of polyphony in the mass of information about what a chef, a food designer, or anybody who makes food on an everyday basis should use as a source of inspiration for inventing and preparing new dishes: nostalgia, raw materials, cookbooks, etc. In today's world of fine dining, nature is the main inspiration for outstanding achievements, as exemplified by the Slovenian restaurant Hiša Franko** and its chef Ana Roš. The starting point for the project and the research was the idea of using art history as an inspiration for gastronomy. The research relies on data collection via interviews, ethnography, and autoethnography. In this case, the reflective introspection of the researcher is also relevant because the researcher is an important part of the process (GOULD, 1995). The paper reviews the findings of the autoethnography literature relevant to the topic. The literature review also points out that sustainability, eating as an experience, and the world of art can be linked. As ERDMANN and co-authors (1999) argue, the health dimension of sustainability has a component called 'joy of eating,' which implies strong ties to the experiential nature of eating. Therefore, it is worth comparing this with PINE and GILMORE's (1998) theory of the experience economy and with CSÍKSZENTMIHÁLYI's (1999) concept of flow, which give examples from gastronomy and art. The aim of the research is to map the experiences of the pilot project and the discourse between the art world and the actors of gastronomy. Another noteworthy aspect is whether the chefs are willing to use art history as an inspiration.

Keywords: art history, autoethnography, chef, education, experience, food preparation, inspiration, sustainability

Procedia PDF Downloads 142
563 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation

Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy

Abstract:

A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management. However, there are now commercially available and affordable portable EEG headsets. Thus, these headsets can potentially be used to continuously monitor cognitive exertion during mental tasks to alert the wearer of overexertion, with the aim of preventing the occurrence of symptoms to speed recovery times. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected from a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), three logic puzzles of increasing difficulty, three multiplication tasks of increasing difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created using data from the first session. The performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion. This demonstrates that the algorithms developed in this study, used with portable EEG devices, have the potential to aid in the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
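The per-participant workflow described above can be sketched roughly as follows; the band-power feature layout and the regressor choice are assumptions for illustration, since the abstract does not specify the customized model.

```python
# Hypothetical sketch of the per-participant workflow: train a regression model
# on session-1 EEG band-power features and evaluate it against NASA-TLX scores
# from session 2. Feature layout, model choice and the synthetic arrays are
# assumptions, not the authors' code or data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# 10 tasks x (4 channels x 4 frequency bands) band-power features per session
X_session1 = rng.normal(size=(10, 16))
y_tlx_session1 = rng.uniform(0, 100, size=10)   # self-reported NASA-TLX
X_session2 = rng.normal(size=(10, 16))
y_tlx_session2 = rng.uniform(0, 100, size=10)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_session1, y_tlx_session1)           # personalised model, session 1

predicted_exertion = model.predict(X_session2)  # tested on session 2
r, _ = pearsonr(y_tlx_session2, predicted_exertion)
print(f"correlation between TLX and predicted exertion: {r:.2f}")
```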

Keywords: cognitive activity, EEG, machine learning, personalized recovery

Procedia PDF Downloads 217
562 A Novel Application of Cordycepin (Cordyceps sinensis Extract): Maintaining Stem Cell Pluripotency and Improving iPS Generation Efficiency

Authors: Shih-Ping Liu, Cheng-Hsuan Chang, Yu-Chuen Huang, Shih-Yin Chen, Woei-Cherng Shyu

Abstract:

Embryonic stem cells (ES) and induced pluripotent stem cells (iPS) are both pluripotent stem cells. In mouse stem cell culture, leukemia inhibitory factor (LIF) is used to maintain the pluripotency of stem cells in vitro. However, LIF is an expensive reagent. The goal of this study was to find a pure compound extracted from Chinese herbal medicine that could maintain stem cell pluripotency in place of LIF and improve iPS generation efficiency. From 20 candidate traditional Chinese medicines, we found that Cordyceps militaris triggered the up-regulation of the stem cell activating genes Oct4 and Sox2 in MEF cells. Cordycepin, a major active component of Cordyceps militaris, could also up-regulate Oct4 and Sox2 gene expression. Furthermore, we treated ES and iPS cells with different concentrations of Cordycepin (replacing LIF in the culture medium) to test whether it could maintain pluripotency. The results showed higher expression levels of several stem cell markers in 10 μM Cordycepin-treated ES and iPS cells compared to controls that did not contain LIF, including alkaline phosphatase, SSEA1, and Nanog. Embryoid body formation and differentiation confirmed that 10 μM Cordycepin-containing medium was capable of maintaining stem cell pluripotency after four passages. For mechanism analysis, microarray analysis indicated the extracellular matrix (ECM) and Jak/Stat signaling pathways as the top two deregulated pathways. In the ECM pathway, we determined that integrin αVβ5 expression levels and phosphorylated Src levels increased after Cordycepin treatment. In addition, phosphorylated Jak2 and phosphorylated Stat3 protein levels were increased after Cordycepin treatment and suppressed by the Jak2 inhibitor AG490. Q-PCR and ELISA assays also showed that the expression of cytokines associated with the Jak2/Stat3 signaling pathway was up-regulated. Lastly, we used Oct4-GFP MEF cells to test iPS generation efficiency following Cordycepin treatment. We observed that 10 μM Cordycepin significantly increased iPS generation efficiency on day 21. In conclusion, we demonstrated that Cordycepin could maintain the pluripotency of stem cells through both the ECM and Jak2/Stat3 signaling pathways and improved iPS generation efficiency.

Keywords: cordycepin, iPS cells, Jak2/Stat3 signaling pathway, molecular biology

Procedia PDF Downloads 435
561 Different Approaches to Teaching a Database Course to Undergraduate and Graduate Students

Authors: Samah Senbel

Abstract:

Database design is a fundamental part of the Computer Science and Information Technology curricula in any school, as well as of the study of management, business administration, and data analytics. In this study, we compare the performance of two groups of students studying the same database design and implementation course at Sacred Heart University in the fall of 2018. Both courses used the same textbook and were taught by the same professor, one for seven graduate students and one for 26 undergraduate students (juniors). The undergraduate students were aged around 20 years old with little work experience, while the graduate students averaged 35 years old and all were employed in computer-related or management-related jobs. The textbook used was 'Database Systems, Design, Implementation, and Management' by Coronel and Morris, and the course was designed to follow the textbook at roughly a chapter per week. The first six weeks covered the design aspect of a database, followed by a paper exam. The next six weeks covered the implementation aspect of the database using SQL, followed by a lab exam. Since the undergraduate students are on a 16-week semester, we spent the last three weeks of the course covering NoSQL. This part of the course was not included in this study. After the course was over, we analyzed the results of the two groups of students. An interesting discrepancy was observed: in the database design part of the course, the average grade of the graduate students was 92%, while that of the undergraduate students was 77% for the same exam. In the implementation part of the course, we observed the opposite: the average grade of the graduate students was 65%, while that of the undergraduate students was 73%. The overall grades were quite similar: the graduate average was 78% and that of the undergraduates was 75%. Based on these results, we concluded that having both classes follow the same time schedule was not beneficial, and an adjustment was needed. The graduates could spend less time on design, and the undergraduates would benefit from more design time. In the fall of 2019, 30 students registered for the undergraduate course and 15 students registered for the graduate course. To test our conclusion, the undergraduates spent about 67% of the time (eight classes) on the design part of the course and 33% (four classes) on the implementation part, using the exact same exams as the previous year. This resulted in an improvement in their average grade on the design part from 77% to 83% and in their average implementation grade from 73% to 79%. In conclusion, we recommend using two separate schedules for teaching the database design course. For undergraduate students, it is important to spend more time on the design part rather than the implementation part of the course. For the older graduate students, we recommend spending more time on the implementation part, as that seems to be the part they struggle with, even though they have a higher understanding of the design component of databases.

Keywords: computer science education, database design, graduate and undergraduate students, pedagogy

Procedia PDF Downloads 116
560 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector

Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador

Abstract:

This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be more accurately evaluated by country and sector. Furthermore, the Bayesian approach allows for testing different hypotheses about the homogeneity of the behaviour of the above components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007 and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important. That importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of total variation and displays a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed. Industry effects, which also depend on the sector and country analysed, have a more marginal importance and are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar for most of the sectors and countries analysed.
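As a simplified, non-Bayesian illustration of the permanent/transitory decomposition and the adjustment-speed idea discussed above, the following sketch simulates firm-level abnormal ROA and recovers the adjustment speed with a pooled partial-adjustment regression; it does not reproduce the hierarchical Bayesian MCMC estimation used in the study, and all numbers are synthetic.

```python
# Each firm's abnormal ROA is modelled as a permanent firm-specific level plus
# a transitory AR(1) deviation; the adjustment speed is 1 - lambda. Synthetic
# data only; the hierarchical Bayesian/MCMC machinery is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_years = 500, 9          # roughly the 1999-2007 span
lam = 0.66                         # persistence -> adjustment speed ~ 34%

permanent = rng.normal(0.0, 0.05, size=n_firms)       # firm-specific component
roa = np.zeros((n_firms, n_years))
roa[:, 0] = permanent + rng.normal(0.0, 0.02, n_firms)
for t in range(1, n_years):
    transitory = lam * (roa[:, t - 1] - permanent) + rng.normal(0.0, 0.02, n_firms)
    roa[:, t] = permanent + transitory

# Pooled partial-adjustment regression: (x_t - x_{t-1}) on (x_bar_i - x_{t-1})
x_prev = roa[:, :-1].ravel()
x_next = roa[:, 1:].ravel()
x_bar = np.repeat(roa.mean(axis=1), n_years - 1)
speed = np.polyfit(x_bar - x_prev, x_next - x_prev, 1)[0]
print(f"estimated adjustment speed: {speed:.2f}")     # roughly 1 - lam, small-T bias aside
```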

Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets

Procedia PDF Downloads 399
559 Intensification of Heat Transfer Using Al₂O₃-Cu/Water Hybrid Nanofluid in a Circular Duct with Inserts

Authors: Muluken Biadgelegn Wollele, Mebratu Assaye Mengistu

Abstract:

Nanotechnology has created new opportunities for improving industrial efficiency and performance. One of the proposed approaches to improving the effectiveness of heat exchangers is the use of nanofluids to improve heat transfer performance. The thermal conductivity of nanoparticles, as well as their size, diameter, and volume concentration, all play a role in the rate of heat transfer. Nanofluids are commonly used in automobiles, energy storage, electronic component cooling, solar absorbers, and nuclear reactors. Convective heat transfer must be improved when designing thermal systems in order to reduce heat exchanger size, weight, and cost. Using roughened surfaces to promote heat transfer has been tried several times, and both active and passive heat transfer methods show potential in terms of heat transfer improvement. Combining the two methods offers the added advantage of enhanced heat transfer; however, the pressure drop during flow must be considered. Thus, the current research aims to increase heat transfer by adding a twisted tape insert to a plain tube, using an Al₂O₃-Cu hybrid nanofluid with water as the base fluid. A circular duct with inserts, a tube length of 3 meters, a hydraulic diameter of 0.01 meters, tube walls with a constant heat flux of 20 kW/m², and a twist ratio of 125 was used to investigate the Al₂O₃-Cu/H₂O hybrid nanofluid with inserts. The temperature distribution is better than with conventional tube designs due to the stronger tangential flow and swirl induced by the twisted tape. The Nusselt number values of tubes with plain twisted tape are 1.5–2.0 percent higher than those of plain tubes. When twisted tape is used instead of a plain tube, the performance evaluation criterion improves by a factor of 1.01. A heat exchanger suitable for a range of applications can be designed using a combined analysis that incorporates passive and active methodologies.
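The performance evaluation criterion quoted above weighs the heat-transfer gain of the enhanced tube against its friction penalty; a commonly used definition, PEC = (Nu/Nu0)/(f/f0)^(1/3), is assumed in the sketch below, and the numbers are illustrative only.

```python
# Performance evaluation criterion (PEC) at equal pumping power, assuming the
# common form PEC = (Nu/Nu0) / (f/f0)**(1/3). Example values are illustrative,
# not the study's measured Nusselt numbers or friction factors.
def performance_evaluation_criterion(nu_enhanced: float, nu_plain: float,
                                     f_enhanced: float, f_plain: float) -> float:
    """Return the PEC of an enhanced tube relative to the plain tube."""
    return (nu_enhanced / nu_plain) / (f_enhanced / f_plain) ** (1.0 / 3.0)

if __name__ == "__main__":
    # e.g. a ~2% Nusselt number increase with a ~3% friction factor penalty
    pec = performance_evaluation_criterion(nu_enhanced=102.0, nu_plain=100.0,
                                           f_enhanced=0.0309, f_plain=0.030)
    print(f"PEC = {pec:.3f}")   # ~1.01, in line with the reported improvement
```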

Keywords: nanofluids, active method, passive method, Nusselt number, performance evaluation criteria

Procedia PDF Downloads 71
558 Radiation Protection and Licensing for an Experimental Fusion Facility: The Italian and European Approaches

Authors: S. Sandri, G. M. Contessa, C. Poggi

Abstract:

An experimental nuclear fusion device can be seen as a step toward the development of the future nuclear fusion power plant. Compared with other possible solutions to the energy problem, nuclear fusion has advantages that ensure sustainability and security. In particular, considering the radioactivity and the radioactive waste produced, in a nuclear fusion plant the component materials could be selected in order to limit the decay period, making recycling in a new reactor possible about 100 years after the beginning of decommissioning. To achieve this and other pertinent goals, many experimental machines have been developed and operated worldwide in the last decades, underlining that radiation protection and worker exposure are critical aspects of these facilities due to the high-flux, high-energy neutrons produced in the fusion reactions. Direct radiation, material activation, tritium diffusion and other related issues pose a real challenge to the demonstration that these devices are safer than nuclear fission facilities. In Italy, a limited number of fusion facilities have been constructed and operated over the last 30 years, mainly at the ENEA Frascati Center, and the radiation protection approach, addressed by the national licensing requirements, shows that it is not always easy to respect the constraints on workers' exposure to ionizing radiation. In the current analysis, the main radiation protection issues encountered in the Italian fusion facilities are considered and discussed, and the technical and legal requirements are described. The licensing process for these kinds of devices is outlined and compared with that of other European countries. The following aspects are considered throughout the current study: i) description of the installation, plant and systems, ii) suitability of the area, buildings, and structures, iii) radioprotection structures and organization, iv) exposure of personnel, v) accident analysis and relevant radiological consequences, vi) radioactive waste assessment and management. In conclusion, the analysis points out the need for special attention to the radiological exposure of the workers in order to demonstrate at least the same level of safety as that reached at nuclear fission facilities.

Keywords: fusion facilities, high energy neutrons, licensing process, radiation protection

Procedia PDF Downloads 349
557 Problems concerning Formation of Institutional Framework for Electronic Democracy in Georgia

Authors: Giorgi Katamadze

Abstract:

Open public service and accountability towards citizens are important features of a democratic state based on the rule of law. Effective use of electronic resources simplifies bureaucratic procedures, enables direct communication, helps exchange information, ensures the government's openness and, in general, helps develop electronic/digital democracy. The development of electronic democracy should be a strategic dimension of Georgian governance. The formation of electronic democracy and its functional improvement should become an important dimension of the state's information policy. Electronic democracy is based on electronic governance and implies modern information and communication systems and their adaptation to universal standards. E-democracy needs the involvement of governments, voters, political parties and social groups in an electronic form. In recent years, the process of interaction between the citizen and the state has become simpler. This is achieved by the use of modern technological systems, which give citizens the possibility to use different public services online. For example, the website my.gov.ge makes interaction between the citizen, business and the state simpler, more comfortable and more secure. A higher standard of accountability and interaction is being established. Electronic democracy brings new forms of interaction between the state and the citizen: e-engagement – participation of society in state politics via electronic systems; e-consultation – electronic interaction among public officials, citizens and interested groups; e-controllership – electronic rule and control of public expenses and services. Public transparency is one of the milestones of electronic democracy as well as representative democracy, as democracy can only be established on mutual trust and accountability. In Georgia, institutional changes concerning the establishment and development of electronic democracy are not sufficient. Effective planning and implementation of a comprehensive and multi-component e-democracy program (at central, regional and local levels) requires telecommunication systems as well as institutional (public service, competencies, logical system) and informational (relevant conditions for public involvement) support. Therefore, a systematic project for the formation of electronic governance should be developed which includes the central, regional and municipal levels and certain aspects of the development of an instrumental basis for electronic governance.

Keywords: e-democracy, e-governance, e-services, information technology, public administration

Procedia PDF Downloads 332
556 Wax Patterns for Integrally Cast Rotors/Stators of Aeroengine Gas Turbines

Authors: Pradyumna R., Sridhar S., A. Satyanarayana, Alok S. Chauhan, Baig M. A. H.

Abstract:

Modern turbine engines for aerospace applications need precision investment-cast components, such as integrally cast rotors and stators, for their hot-end turbine stages. Traditionally, these turbines are used as starter engines. In recent times, such engines have also been used for strategic missile applications. The rotor/stator castings consist of a central hub (shrouded in some designs) over which a number of aerofoil-shaped blades are located. Since these components cannot be machined, investment casting is the only available route for manufacture, and hence stringent dimensional aerospace quality has to be built into the casting process itself. In the investment casting process, pattern generation by injection of wax into dedicated dies/moulds is the first critical step. The traditional approach involves producing individual blades with hub/shroud features through wax injection and assembling a set of such injected patterns onto a dedicated and precisely manufactured fixture to wax-weld and generate an integral wax pattern, a process known as the 'segmental approach'. It is possible to design a single-injection die with retractable metallic inserts in the case of untwisted blades of stator patterns without the shroud. Such an approach is also possible for twisted blades of rotors, with a highly complex design of inter-blade inserts and retraction mechanisms. DMRL has long-established methods and procedures for the above and has successfully supplied precision castings for various defence-related projects. In recent times, a urea-based soluble insert approach has also been successfully applied to overcome the need to design and manufacture a precision assembly fixture, leading to a substantial reduction in component development times. The present paper discusses at length the various approaches tried and established at DMRL to generate precision wax patterns for aerospace-quality turbine rotors and stators. In addition, the importance of simulation in solving issues related to wax injection is also touched upon.

Keywords: die/mold and fixtures, integral rotor/stator, investment casting, wax patterns, simulation

Procedia PDF Downloads 339
555 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis

Authors: Claudia Aguilar, Amen Farooq

Abstract:

Among other cities in the United States, the city of Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With the surge of population influx, the available housing is practically bursting at the seams. Supply is low, and demand is high. In effect, the average one-bedroom apartment runs for 1,800 dollars per month. The city is desperately seeking new opportunities to provide affordable housing at an expeditious rate. This has been made evident by the recent updates to the city's zoning. With the recent influx in the housing market, young professionals, in particular millennials, are desperately looking for alternatives to stay within the city. To remedy Atlanta's affordable housing crisis, the city of Atlanta is planning to introduce 40 thousand new affordable housing units by 2026. To meet this urgent need for more affordable housing, the architectural response needs to adapt. A method that has proven successful in modern housing is modular development, yet it has been constrained by the dimensions of the maximum load for an eighteen-wheeler. This constraint has diluted the architect's ability to produce site-specific, informed design and rather contributes to the 'cookie cutter' stigma with which the method has been labeled. This thesis explores the design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that could break away from the constraints of transport and deliver adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing. This proposal aims specifically to design a kit of parts that is easily transported and assembled but also gives the ability to customize the use of components to suit all unique conditions. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption while providing quality housing with affordable and flexible options.

Keywords: adaptive assemblies, modular architecture, adaptability, constructibility, kit of parts

Procedia PDF Downloads 78
554 The Evaluation of Antioxidant and Antimicrobial Activities of the Essential Oil and Aqueous, Methanol, Ethanol, Ethyl Acetate and Acetone Extracts of Hypericum scabrum

Authors: A. Heshmati, M. Y Alikhani, M. T. Godarzi, M. R. Sadeghimanesh

Abstract:

Herbal essential oils and extracts are good sources of natural antioxidant and antimicrobial compounds, and Hypericum is one of the potential sources of these compounds. In this study, the antioxidant and antimicrobial activities of the essential oil and the aqueous, methanol, ethanol, ethyl acetate and acetone extracts of Hypericum scabrum were assessed. Flowers of Hypericum scabrum were collected from the mountains surrounding Hamadan province and, after drying in the shade, the essential oil was extracted with a Clevenger apparatus and the water, methanol, ethanol, ethyl acetate and acetone extracts were obtained by maceration. Essential oil compounds were identified using GC-MS. The Folin-Ciocalteu and aluminum chloride (AlCl3) colorimetric methods were used to measure the amounts of phenolic acids and flavonoids, respectively. Antioxidant activity was evaluated using DPPH and FRAP. The minimum inhibitory concentration (MIC) and the minimum bactericidal/fungicidal concentration (MBC/MFC) of the essential oil and extracts were evaluated against Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Salmonella typhimurium, Aspergillus flavus and Candida albicans. The essential oil yield was 0.35%; the lowest and highest extract yields were obtained for the ethyl acetate and water extracts, respectively. The major component of the essential oil was α-pinene (46.35%). The methanol extract had the highest phenolic acid (95.65 ± 4.72 µg gallic acid equivalent/g dry plant) and flavonoid (25.39 ± 2.73 µg quercetin equivalent/g dry plant) contents. The percentage of DPPH radical inhibition showed a positive correlation with the concentration of essential oil or extract. The methanol and ethanol extracts had the highest DPPH radical inhibitory activity. The essential oil and extracts of Hypericum had antimicrobial activity against the microorganisms studied in this research. The MIC and MBC values for the essential oil were in the range of 25-25.6 and 25-50 μg/mL, respectively. For the extracts, these values were 1.5625-100 and 3.125-100 μg/mL, respectively. The methanol extract had the highest antimicrobial activity. The essential oil and extracts of Hypericum scabrum, especially the methanol extract, have suitable antimicrobial and antioxidant activity, and they can be used to control oxidation and inhibit the growth of pathogenic and spoilage microorganisms. In addition, they can be used as substitutes for synthetic antioxidant and antimicrobial compounds.
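The DPPH radical inhibition percentage referred to above is commonly computed from the absorbance of a control and of the sample; the short sketch below shows this calculation with illustrative absorbance values, not the study's measurements.

```python
# Common DPPH inhibition calculation from control and sample absorbances.
# The example absorbances are illustrative only.
def dpph_inhibition_percent(abs_control: float, abs_sample: float) -> float:
    """Percentage inhibition of the DPPH radical."""
    return (abs_control - abs_sample) / abs_control * 100.0

print(f"{dpph_inhibition_percent(abs_control=0.95, abs_sample=0.38):.1f}% inhibition")
```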

Keywords: antimicrobial, antioxidant, extract, hypericum

Procedia PDF Downloads 321
553 Modelling Tyre Rubber Materials for High Frequency FE Analysis

Authors: Bharath Anantharamaiah, Tomas Bouda, Elke Deckers, Stijn Jonckheere, Wim Desmet, Juan J. Garcia

Abstract:

Automotive tyres have recently been gaining importance in terms of their noise emission, not only with respect to noise reduction, but also noise perception and detection. Tyres exhibit a mechanical noise generation mechanism up to 1 kHz. However, owing to the fact that a tyre is a composite of several materials, it has been difficult to model it using finite elements to predict noise at high frequencies. The currently available FE models are reliable up to about 500 Hz, a limit which, however, is not enough to capture the roughness or sharpness of tyre noise. These noise components are important in order to alert pedestrians on the street to slowly passing vehicles, especially electric vehicles. In order to model tyre noise behaviour up to 1 kHz, the tyre's dynamic behaviour must be accurately modelled up to that limit using finite elements. Materials play a vital role in modelling the dynamic tyre behaviour precisely. Since the tyre is a composition of several components, their precise definition in finite element simulations is necessary. However, during the tyre manufacturing process, these components are subjected to various pressures and temperatures, due to which their properties can change. Hence, material definitions are better described based on the tyre responses. In this work, the hyperelasticity of the tyre component rubbers is calibrated, using the design of experiments technique, from the characteristic tyre responses that are measured on a stiffness measurement machine. The viscoelasticity of the rubbers is defined by Prony series, which are determined from the loss factor, i.e., the relationship between the loss and storage moduli, assuming that the rubbers are excited within their linear viscoelastic range. These values of the loss factor are measured and theoretically expressed as a function of rubber Shore hardness or hyperelasticity. The results of the work show a good correlation between the measured and simulated vibrational transfer functions up to 1 kHz. The model also allows flexibility, i.e., the frequency limit can be extended, if required, by calibrating the Prony parameters of the rubbers corresponding to the frequency of interest. As future work, these tyre models will be used for noise generation at high frequencies and thus for tyre noise perception.
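The link between the Prony series, the storage and loss moduli and the loss factor mentioned above can be sketched as follows for a generalized Maxwell model; the Prony pairs and the long-term modulus are illustrative values, not the calibrated tyre-rubber parameters.

```python
# Storage modulus, loss modulus and loss factor of a generalized Maxwell model
# described by Prony terms (E_i, tau_i) plus a long-term modulus E_inf.
# The parameter values below are illustrative, not measured tyre-rubber data.
import numpy as np

def prony_moduli(omega, e_inf, e_terms, tau_terms):
    """Return (storage modulus, loss modulus, loss factor) at angular frequency omega."""
    omega = np.atleast_1d(omega)[:, None]
    wt = omega * np.asarray(tau_terms)[None, :]
    storage = e_inf + np.sum(np.asarray(e_terms) * wt**2 / (1 + wt**2), axis=1)
    loss = np.sum(np.asarray(e_terms) * wt / (1 + wt**2), axis=1)
    return storage, loss, loss / storage

freq_hz = np.array([100.0, 500.0, 1000.0])          # range of interest up to 1 kHz
storage, loss, tan_delta = prony_moduli(2 * np.pi * freq_hz,
                                        e_inf=5.0,               # MPa
                                        e_terms=[2.0, 1.0],      # MPa
                                        tau_terms=[1e-3, 1e-4])  # s
print(tan_delta)   # loss factor, the quantity measured in the study
```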

Keywords: tyre dynamics, rubber materials, prony series, hyperelasticity

Procedia PDF Downloads 189
552 Controlling RPV Embrittlement through Wet Annealing in Support of Life Extension

Authors: E. A. Krasikov

Abstract:

As the main barrier against the release of radioactivity, the reactor pressure vessel (RPV) is a key component in terms of NPP safety. Therefore, present-day demands for enhanced RPV reliability have to be met by all possible actions for the mitigation of RPV in-service embrittlement. Annealing treatment is known to be an effective measure to restore the RPV metal properties deteriorated by neutron irradiation. There are two approaches to annealing. The first one is the so-called 'dry' high-temperature (~475°C) annealing. It allows practically complete recovery to be obtained, but requires the removal of the reactor core and internals. An external heat source (furnace) is required to carry out the RPV heat treatment. The alternative approach is to anneal the RPV at the maximum coolant temperature that can be obtained using the reactor core or primary circuit pumps while operating within the RPV design limits. This low-temperature 'wet' annealing, although it cannot be expected to produce complete recovery, is more attractive from the practical point of view, especially in cases when the removal of the internals is impossible. The first RPV 'wet' annealing was done using nuclear heat (US Army SM-1A reactor). The second one was done by means of primary pump heat (Belgian BR-3 reactor). As a rule, there is no recovery effect until the difference between the annealing and irradiation temperatures reaches 70°C. It is known, however, that along with radiation embrittlement, neutron irradiation may mitigate the radiation damage in metals. Therefore, we have tried to test the possibility of using the effect of radiation-induced ductilization in 'wet' annealing technology by utilizing nuclear heat as both the heat source and the neutron irradiation source at once. In support of the above-mentioned concept, a 3-year reactor experiment on 15Cr3NiMoV-type steel was fulfilled, with preliminary irradiation in an operating PWR at 270°C and subsequent extra irradiation (87 h at 330°C) in the IR-8 test reactor. In fact, embrittlement was partly suppressed, up to a value equivalent to a 1.5-fold decrease in neutron fluence. The degree of recovery in the case of radiation-enhanced annealing is equal to 27%, whereas furnace annealing results in zero effect under the existing conditions. A mechanism for the radiation-induced damage mitigation is proposed. It is hoped that 'wet' annealing technology will help provide better management of RPV degradation as a factor affecting the lifetime of nuclear power plants and, together with associated management methods, will help facilitate the safe and economic long-term operation of PWRs.

Keywords: controlling, embrittlement, radiation, steel, wet annealing

Procedia PDF Downloads 373
551 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste

Authors: David Holton, Michelle Dickinson, Giovanni Carta

Abstract:

The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) present a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of a buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from neighbouring heat-generating wastes, incrementally contributing to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of those containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste. A number of examples, including different geometrical layouts and chequer-boarding, are illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty, at a generic stage of the development of a GDF.
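The superposition idea described above, in which neighbouring containers incrementally raise the buffer temperature around a central container, can be sketched with the classical constant-power point-source solution in an infinite homogeneous medium; the constant heat output, the grid layout and the property values below are simplifications for illustration and do not represent the semi-analytical tool used in the study.

```python
# Peak buffer temperature rise near a central container estimated by summing
# contributions from neighbouring containers, each treated as a continuous
# point source of constant power in infinite homogeneous rock (Carslaw & Jaeger).
# Constant power, layout and property values are illustrative assumptions only.
import numpy as np
from scipy.special import erfc

def point_source_rise(q_watt, r_m, t_s, k=2.5, alpha=1.0e-6):
    """Temperature rise (K) at distance r and time t from a constant point source."""
    return q_watt / (4.0 * np.pi * k * r_m) * erfc(r_m / (2.0 * np.sqrt(alpha * t_s)))

# Square array of containers; evaluate at the buffer radius of the central one
spacing, q, t = 20.0, 500.0, 50.0 * 365.25 * 24 * 3600.0   # m, W, ~50 years
positions = [(i * spacing, j * spacing)
             for i in range(-3, 4) for j in range(-3, 4) if (i, j) != (0, 0)]

buffer_radius = 1.0
rise = point_source_rise(q, buffer_radius, t)              # central container itself
rise += sum(point_source_rise(q, np.hypot(x, y), t) for x, y in positions)
print(f"estimated buffer temperature rise: {rise:.1f} K")
```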

Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel

Procedia PDF Downloads 280
550 The Role and Effects of Communication on Occupational Safety: A Review

Authors: Pieter A. Cornelissen, Joris J. Van Hoof

Abstract:

Interest in improving occupational safety started almost simultaneously with the beginning of the Industrial Revolution. Yet it was not until the late 1970s that the role of communication was considered in scientific research regarding occupational safety. In recent years the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important topic in research, its operationalization is often vague and differs among studies. This is problematic not only when comparing results, but also when applying these results to practice and the work floor. By means of an in-depth analysis (building on an existing dataset), this review aims to overcome these problems. The initial database search yielded 25,527 articles, which was reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that in the literature a distinction is commonly made between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is a consensus among researchers that both communication and feedback positively affect safety performance, there is a debate about the directness of this relationship. Whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate. One of the key findings is that, despite the strongly present view that safety communication is a formal and top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences, and views, and to share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and differing operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).

Keywords: communication, feedback, occupational safety, review

Procedia PDF Downloads 299
549 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect CHD symptoms early and support prevention through web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which were then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
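A hypothetical sketch of the evaluation step described above is given below: an ensemble of a neural network and a gradient-boosting model is trained on synthetic stand-in features and scored with accuracy, precision and recall; the features, labels and model choices are assumptions for illustration only.

```python
# Ensemble of a neural network and a gradient-boosting classifier trained on
# synthetic stand-in features (e.g. heart rate variability, blood pressure,
# lipid and ECG summaries) and scored with the metrics reported above.
# Everything here is an illustrative assumption, not the study's pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
                ("gbt", GradientBoostingClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_train, y_train)

y_pred = ensemble.predict(X_test)
print(f"accuracy:  {accuracy_score(y_test, y_pred):.2f}")
print(f"precision: {precision_score(y_test, y_pred):.2f}")
print(f"recall:    {recall_score(y_test, y_pred):.2f}")
```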

Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 60
548 Treating Complex Pain and Addictions with Bioelectrode Therapy: An Acupuncture Point Stimulus Method for Relieving Human Suffering

Authors: Les Moncrieff

Abstract:

In a world awash with potent opioids fuelling an international crisis, the need to explore safe alternatives has never been more urgent. Bio-electrode therapy is a novel adjunctive treatment method for relieving acute opioid withdrawal symptoms and many types of complex acute and chronic pain (often the underlying cause of opioid dependence). By combining the science of developmental bioelectricity with Traditional Chinese Medicine's theory of meridians, rapid relief from pain is routinely being achieved in the clinical setting. Human body functions are dependent on electrical factors, and acupuncture points on the body are known to have higher electrical conductivity than the surrounding skin tissue. When tiny gold- and silver-plated electrodes are secured to the skin at specific acupuncture points using established Chinese Medicine principles and protocols, an enhanced microcurrent and electrical field are created between the electrodes, influencing the entire meridian and connecting meridians. No external power source or electrical devices are required. Endogenous DC electric fields are an essential fundamental component of development, regeneration, and wound healing. Disruptions in the normal ion charge in the meridians and circulation of blood manifest as pain and the development of disease. With the application of these simple electrodes (gold acting as cathode and silver as anode) according to protocols, the resulting microcurrent is directed along the selected meridians to target injured or diseased organs and tissues. When injured or diseased cells have been stimulated by the microcurrent and electrical fields, the permeability of the cell membrane is affected, resulting in an immediate relief of pain, a rapid balancing of positive and negative ions (sodium, potassium, etc.) in the cells, the restoration of intracellular fluid levels, replenishment of electrolyte levels, pH balance, removal of toxins, and a re-establishment of homeostasis.

Keywords: bioelectricity, electrodes, electrical fields, acupuncture meridians, complex pain, opioid withdrawal management

Procedia PDF Downloads 72
547 Streamwise Vorticity in the Wake of a Sliding Bubble

Authors: R. O’Reilly Meehan, D. B. Murray

Abstract:

In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell and tube heat exchangers, mineral flotation, and oxidation in water treatment. Although a large body of work exists for bubbles rising in an unbounded medium, bubbles rising in constricted geometries have received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single sliding air bubble under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and a high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. The study reconstructs the vorticity field in the wake of the bubble, converting the field at different instants in time into slices of a large-scale wake structure. This is, in essence, Taylor's "frozen turbulence" hypothesis. Applying it to the vorticity fields provides a pseudo three-dimensional representation from 2-D data, allowing for a more intuitive understanding of the bubble wake. This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
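As an illustration of the "frozen turbulence" reconstruction described above, the following sketch stacks successive planar vorticity fields into a pseudo three-dimensional volume by mapping time onto streamwise distance through an assumed convection velocity. It is not the authors' processing code; the frame rate, convection velocity, and grid spacing are placeholder values.

```python
# Minimal sketch (assumed, not the authors' processing code): applying Taylor's
# frozen-turbulence idea to stack successive 2-D PIV vorticity slices into a
# pseudo-3-D wake volume. Frame rate, convection velocity, and array sizes are
# illustrative assumptions.
import numpy as np

def pseudo_3d_wake(vorticity_slices, dt, u_convect, dy, dz):
    """Re-interpret a time series of planar vorticity fields as spatial slices.

    vorticity_slices : (n_frames, ny, nz) array of streamwise vorticity in the
                       measurement plane at successive instants.
    dt               : time between PIV frames (s).
    u_convect        : convection (sliding) velocity of the wake (m/s).
    dy, dz           : in-plane grid spacing (m).
    Returns the stacked volume plus x (streamwise), y, z coordinates, with
    x = u_convect * t, i.e. time mapped onto streamwise distance.
    """
    omega = np.asarray(vorticity_slices)
    n_frames, ny, nz = omega.shape
    x = u_convect * dt * np.arange(n_frames)   # frozen-turbulence mapping t -> x
    y = dy * np.arange(ny)
    z = dz * np.arange(nz)
    return omega, x, y, z

# toy usage: 100 frames of a 64 x 64 vorticity field at 500 Hz, wake convecting at 0.25 m/s
frames = np.random.default_rng(1).normal(size=(100, 64, 64))
volume, x, y, z = pseudo_3d_wake(frames, dt=1 / 500, u_convect=0.25, dy=1e-3, dz=1e-3)
print(volume.shape, x[:3], y[-1], z[-1])
```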

Keywords: bubbly flow, particle image velocimetry, two-phase flow, wake structures

Procedia PDF Downloads 371
546 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization

Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın

Abstract:

There are numerous analytical methods for estimating the crack growth life of a component. Soft computing methods show an increasing trend in fatigue life prediction: their ability to model complex relationships and to handle large amounts of data motivates researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry; regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. After the finite element results are validated against analytical calculations, the findings of the finite element model are fed to AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then utilized as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between analytical and predicted crack growth lives.
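As an illustration of the hyperparameter-optimized models described above, the sketch below runs a grid search over a random forest regressor and a randomized search over a neural network regressor. The feature and life arrays are placeholders, and a scikit-learn MLPRegressor stands in for the study's TensorFlow network so the example stays self-contained.

```python
# Minimal sketch (placeholder data; a scikit-learn MLPRegressor stands in for the
# study's TensorFlow network): grid search for the random forest and randomized
# search for the neural network, as in the described workflow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split

rng = np.random.default_rng(0)
# placeholder data: 90 load spectra summarised by a few features, target = crack growth life
X = rng.normal(size=(90, 6))
y = 1e5 + 2e4 * X[:, 0] - 1.5e4 * X[:, 1] + rng.normal(0, 5e3, 90)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf_search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
ann_search = RandomizedSearchCV(
    MLPRegressor(max_iter=5000, random_state=0),
    param_distributions={
        "hidden_layer_sizes": [(32,), (64, 32), (128, 64)],
        "alpha": [1e-4, 1e-3, 1e-2],
        "learning_rate_init": [1e-3, 1e-2],
    },
    n_iter=8,
    cv=5,
    random_state=0,
)
for name, search in [("random forest", rf_search), ("neural network", ann_search)]:
    search.fit(X_tr, y_tr)
    print(name, "best params:", search.best_params_,
          "hold-out R^2:", round(search.best_estimator_.score(X_te, y_te), 3))
```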

Keywords: aircraft, fatigue, joint, life, optimization, prediction

Procedia PDF Downloads 170
545 Parametrical Analysis of Stain Removal Performance of a Washing Machine: A Case Study of Sebum

Authors: Ozcan B., Koca B., Tuzcuoglu E., Cavusoglu S., Efe A., Bayraktar S.

Abstract:

A washing machine is mainly used for removing various types of dirt and stains and for eliminating malodorous substances from textile surfaces. Stains originate from sources ranging from the human body to environmental contamination, and there are accordingly various methods for removing them. They are roughly classified into four groups: oily (greasy) stains, particulate stains, enzymatic stains, and bleachable (oxidizable) stains. Oily stains on clothing surfaces commonly result from contact with organic substances of the human body (e.g., perspiration, skin shedding, and sebum) or from exposure to an oily environmental pollutant (e.g., oily foods). Studies have shown that human sebum is a major component of the oily soil found on garments, and if it ages under various environmental conditions, it can generate stubborn yellow stains on the textile surface. In this study, a parametric study was carried out to investigate the key factors affecting the cleaning performance, specifically the sebum removal performance, of a washing machine. These parameters are the mechanical agitation (tumble percentage), the amount of water consumed, and the total washing period. A full factorial design of experiments is used to capture all possible parametric interactions, using the Minitab 2021 statistical software. Tests are carried out with a commercial liquid detergent and two types of sebum-soiled fabric: cotton and cotton + polyester. The parametric results revealed that, for both test samples, increasing the washing time and the mechanical agitation leads to much better sebum removal. However, the water amount had a different effect for each sample: increasing the water amount decreases the performance for cotton + polyester fabric, while it is favorable for cotton fabric. It was also discovered that the type of textile can greatly affect sebum removal performance; results showed that cotton + polyester fabrics are much easier to clean than cotton fabric.
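As an illustration of the full factorial analysis described above (performed by the authors in Minitab 2021), the sketch below builds a two-level factorial design in agitation, water amount, and wash time and fits a model with two-factor interactions for each fabric. The factor levels and responses are invented purely to mirror the qualitative trends reported.

```python
# Minimal sketch (a Python analogue of the full-factorial analysis the abstract
# describes in Minitab 2021; factor levels and responses here are illustrative):
# a 2-level full factorial design in agitation, water amount and wash time, with a
# two-factor-interaction model fitted per fabric.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

levels = {"agitation": [-1, 1], "water": [-1, 1], "time": [-1, 1]}  # coded low/high
design = pd.DataFrame(list(itertools.product(*levels.values())), columns=levels.keys())

rng = np.random.default_rng(0)
# hypothetical responses: removal improves with agitation and time; more water helps
# cotton but hurts cotton + polyester (mirroring the qualitative finding)
frames = []
for fabric, water_sign in [("cotton", +1), ("cotton_polyester", -1)]:
    d = design.copy()
    d["fabric"] = fabric
    d["removal"] = (
        70 + 5 * d["agitation"] + 4 * d["time"] + 3 * water_sign * d["water"]
        + rng.normal(0, 1, len(d))
    )
    frames.append(d)
data = pd.concat(frames, ignore_index=True)

# separate fits per fabric, main effects plus two-factor interactions
for fabric, sub in data.groupby("fabric"):
    fit = smf.ols("removal ~ (agitation + water + time) ** 2", data=sub).fit()
    print(fabric, fit.params.round(2).to_dict())
```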

Keywords: laundry, washing machine, low-temperature washing, cold wash, washing efficiency index, sustainability, cleaning performance, stain removal, oily soil, sebum, yellowing

Procedia PDF Downloads 136
544 Continuous Improvement of Teaching Quality through Course Evaluation by the Students

Authors: Valerie Follonier, Henrike Hamelmann, Jean-Michel Jullien

Abstract:

The Distance Learning University in Switzerland (UniDistance) offers bachelor's and master's courses as well as further education programs. The professors and their assistants work at traditional Swiss universities and deliver their courses at UniDistance following a blended learning and flipped classroom approach. A standardized course evaluation by the students has been established as a component of a quality improvement process. The students' feedback enables the stakeholders to identify areas for improvement, initiate professional development for the teaching teams, and thus continuously improve the quality of instruction. This paper describes the evaluation process and the tools involved, and shows how an approach involving all stakeholders helps form a culture of quality in teaching. It also presents the first evaluation results obtained with the new process. Two software tools have been developed to support all stakeholders in the semi-annual formative evaluation. The first tool allows the survey to be created and assigned to the relevant courses and students. The second tool presents the results of the evaluation to the stakeholders, providing specific features for the teaching teams, the dean, the directorate, and EDUDL+ (Educational development unit distance learning). The survey items were selected in accordance with the e-learning strategy of the institution and are formulated to support the professional development of the teaching teams. By reviewing the results, the teaching teams become aware of the students' opinions and are asked to write feedback for the attention of their dean. The dean reviews the results of the faculty and writes a general report on the situation of the faculty and the improvements intended. Finally, EDUDL+ writes a final report summarising the evaluation results. A mechanism of adjustable warnings generates quality indicators for each module; these are summarised for each faculty and globally for the whole institution in order to increase the vigilance of those responsible. The quality process involves changing the indicators regularly to focus on different areas each semester, to facilitate the professional development of the teaching teams, and to progressively improve the overall teaching quality of the institution.
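As an illustration of the adjustable-warning mechanism described above, the sketch below derives per-module quality indicators from survey item means using a tunable threshold and rolls them up per faculty and for the whole institution. It is a hypothetical sketch, not UniDistance's actual tool; the faculties, modules, scores, and threshold are invented.

```python
# Minimal sketch (assumed, not UniDistance's actual tool): turning course-evaluation
# item means into per-module quality indicators via an adjustable warning threshold,
# then summarising them per faculty and for the whole institution.
import pandas as pd

# illustrative survey results: mean item score (1-5) per module
scores = pd.DataFrame({
    "faculty": ["law", "law", "psychology", "psychology"],
    "module": ["M1", "M2", "M3", "M4"],
    "item_mean": [4.3, 3.1, 2.6, 4.7],
})

WARNING_THRESHOLD = 3.5   # adjustable each semester to shift the evaluation focus

scores["warning"] = scores["item_mean"] < WARNING_THRESHOLD
per_faculty = scores.groupby("faculty")["warning"].sum()
print("warnings per faculty:", per_faculty.to_dict())
print("institution total:", int(scores["warning"].sum()))
```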

Keywords: continuous improvement process, course evaluation, distance learning, software tools, teaching quality

Procedia PDF Downloads 254
543 Comparative Effect of Self-Myofascial Release as a Warm-Up Exercise on Functional Fitness of Young Adults

Authors: Gopal Chandra Saha, Sumanta Daw

Abstract:

Warm-up is an essential component for optimizing performance in various sports before a physical fitness training session. This study investigated the immediate comparative effect of self-myofascial release through vibration rolling (VR), non-vibration rolling (NVR), and static stretching as part of a warm-up treatment on the functional fitness of young adults. Functional fitness is a classification of training that prepares the body for real-life movements and activities. For the present study, 20 male physical education students were selected as subjects; their ages ranged from 20 to 25 years. The functional fitness variables in the present study were flexibility, muscle strength, agility, and static and dynamic balance of the lower extremity. Each of the three warm-up protocols was administered on consecutive days, i.e., with a 24-hour gap, and all tests were administered in the morning. The mean and SD were used as descriptive statistics. The significance of statistical differences among the groups was measured by applying an F-test, and to find the exact location of the differences, a post hoc test (Least Significant Difference) was applied. It was found that only flexibility showed a significant difference among the three types of warm-up exercise. The observed results depicted that VR has more impact on myofascial release in terms of flexibility than NVR and stretching as part of a warm-up exercise, as the p-value was less than 0.05. Within the three warm-up exercises, vibration rolling showed a better mean difference than NVR and static stretching on the functional fitness of young physical education practitioners, although the results were non-significant for muscle strength, agility, and static and dynamic balance of the lower extremity. These findings suggest that sports professionals and coaches may take VR into account when designing more efficient and effective pre-performance routines for long-term improvement of exercise performance. VR has high potential to translate into practical on-field applications.
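As an illustration of the statistical analysis described above, the sketch below runs a one-way ANOVA across the three warm-up conditions and follows it with Fisher's Least Significant Difference post hoc comparisons; the flexibility data are simulated placeholders, not the study's measurements.

```python
# Minimal sketch (illustrative data, not the study's): one-way ANOVA across the three
# warm-up conditions followed by Fisher's Least Significant Difference post hoc test,
# mirroring the F-test / LSD analysis described in the abstract.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "VR":  rng.normal(34.0, 2.0, 20),   # hypothetical sit-and-reach flexibility (cm)
    "NVR": rng.normal(32.5, 2.0, 20),
    "SS":  rng.normal(31.8, 2.0, 20),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Fisher's LSD: pairwise mean differences tested against the pooled within-group MSE
k = len(groups)
n_total = sum(len(g) for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
mse = ss_within / (n_total - k)
for (a, ga), (b, gb) in combinations(groups.items(), 2):
    diff = ga.mean() - gb.mean()
    se = np.sqrt(mse * (1 / len(ga) + 1 / len(gb)))
    t = diff / se
    p = 2 * stats.t.sf(abs(t), df=n_total - k)
    print(f"{a} vs {b}: diff = {diff:.2f}, p = {p:.4f}")
```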

Keywords: self-myofascial release, functional fitness, foam roller, physical education

Procedia PDF Downloads 131
542 Analysis of Complex Business Negotiations: Contributions from Agency-Theory

Authors: Jan Van Uden

Abstract:

The paper reviews classical agency-theory and its contributions to the analysis of complex business negotiations, and proposes an approach for modifying the basic agency-model in order to examine the negotiation-specific dimensions of agency-problems. By illustrating fundamental potentials for modifying agency-theory in the context of business negotiations, the paper highlights recent empirical research that investigates agent-based negotiations and inter-team constellations. A general theoretical analysis of complex negotiations would be based on a two-level approach: first, the modification of the basic agency-model in order to capture the organizational context of business negotiations (i.e., multi-agent issues, common agencies, multi-period models, and the concept of bounded rationality); second, the application of the modified agency-model to complex business negotiations to identify agency-problems and the related areas of risk in the negotiation process. The paper is placed on the first level of analysis, the modification. The method builds, on the one hand, on insights from behavioral decision research (BDR) and, on the other hand, on findings from agency-theory as normative directives for the modification of the basic model. Through neoclassical assumptions concerning the fundamental aspects of agency-relationships in business negotiations (i.e., asymmetric information, self-interest, risk preferences, and conflicts of interest), agency-theory helps to derive solutions to stated worst-case scenarios taken from the daily negotiation routine. As agency-theory is the only universal approach able to identify trade-offs between certain aspects of economic cooperation, the insights obtained provide a deeper understanding of the forces that shape the complexity of business negotiations. The need for a modification of the basic model is illustrated by highlighting selected issues of business negotiations from an agency-theory perspective. Negotiation teams require a multi-agent approach under the condition that decision-makers, as superior agents, are often part of the team; the diversity of competences and decision-making authority is a phenomenon that overrides the assumptions of classical agency-theory and varies greatly across certain forms of business negotiations. Further, the basic model is bound to dyadic relationships preceded by the delegation of decision-making authority and builds on a contractually created (vertical) hierarchy. As a result, horizontal dynamics within the negotiation team, which play an important role for negotiation success, are not considered in the investigation of agency-problems. Also, the trade-off between short-term relationships within the negotiation sphere and the long-term relationships of the corporate sphere calls for a multi-period perspective taking into account the sphere-specific governance mechanisms already established (i.e., reward and monitoring systems). Within the analysis, the implementation of bounded rationality is closely related to findings from BDR in order to assess the impact of negotiation behavior on the underlying principal-agent relationships. As empirical findings show, the disclosure and reservation of information to the agent affect his negotiation behavior as well as the final negotiation outcomes. Last, in the context of business negotiations, asymmetric information is often intended by decision-makers acting as superior agents or principals, which calls for a bilateral risk approach to agency-relations.

Keywords: business negotiations, agency-theory, negotiation analysis, inter-team negotiations

Procedia PDF Downloads 137
541 Management of Third Stage Labour in a Rural Ugandan Hospital

Authors: Brid Dinnee, Jessica Taylor, Joseph Hartland, Michael Natarajan

Abstract:

Background: The third stage of labour (TSL) can be complicated by Post-Partum Haemorrhage (PPH), which has a significant impact on maternal mortality and morbidity. In Africa, 33.9% of maternal deaths are attributable to PPH [1]. In order to minimise this figure, current recommendations for the developing world are that all women receive active management of the third stage of labour (AMTSL). The aim of this project was to examine TSL practice in a rural Ugandan hospital, highlight any deviation from best practice, and identify barriers to change in resource-limited settings, as part of a fourth-year medical student External Student Selected Component field trip. Method: Five key elements from the current World Health Organisation (WHO) guidelines on AMTSL were used to develop an audit tool. All daytime vaginal deliveries over a two-week period in July 2016 were audited. In addition, a retrospective comparison of PPH rates between 2006 (when ubiquitous use of intramuscular oxytocin for management of TSL was introduced) and 2015 was performed. Results: Eight vaginal deliveries were observed; at all of them, intramuscular oxytocin was administered and controlled cord traction was used. Against WHO recommendations, all umbilical cords were clamped within one minute, and no infants received early skin-to-skin contact. In only one case was uterine massage performed after placental delivery. The retrospective comparison identified a 40% reduction in the total number of PPHs from November 2006 to November 2015, and maternal deaths per delivery fell from 2% to 0.5%. Discussion: Maternal mortality and PPH are still major issues in developing countries. Maternal mortality due to PPH can be reduced by good TSL practice, but not all elements of such practice are used in low-resource settings, and there is a notable difference in outcomes between the developed and developing world. At Kitovu Hospital, there has been a reduction in maternal mortality and in the number of PPHs following the introduction of IM oxytocin administration. In order to further improve these rates, staff education and further government funding are key.

Keywords: post-partum haemorrhage, PPH, third stage labour, Uganda

Procedia PDF Downloads 204
540 Effects of Mindfulness Practice on Clinician Burnout: A Scoping Review

Authors: Hani Malik

Abstract:

Background: Clinician burnout is a growing phenomenon in current health systems worldwide. Increasing emotional exhaustion, depersonalisation, and reduced personal accomplishment threaten the effective delivery of healthcare. This can potentially be mitigated by mindfulness practice, which has shown promising results in reducing burnout, restoring compassion, and preventing moral injury in clinicians. Objectives: To conduct a scoping review and identify high-quality studies on mindfulness practice in clinician burnout, synthesize the themes that emerge from these studies, and discuss the implications of the results for healthcare leadership and innovation. Methodology: A focused scoping review was carried out to investigate the effects of mindfulness practice on clinician burnout. High-ranking journals were targeted in order to analyse high-quality studies and synthesize common themes in the literature. Studies conducted on current, practicing physicians were included; mindfulness practice of varying forms was the main intervention studied. Grey literature and studies conducted only on allied health personnel were excluded from this review. Analysis: 31 studies were included in this scoping review. Mindfulness practice was found to decrease emotional exhaustion and depersonalisation while improving mood, responses to stress, and vigour. Self-awareness, compassion, and empathy were also increased in study participants. Four themes emerged from this review: innovations in mindfulness practice, mindfulness and positive psychology, the impact of mindfulness on work and patient care, and barriers and facilitators to clinician mindfulness practice. Conclusion: Mindfulness has been widely reported to benefit mental health and well-being, but the studies reviewed tended to adopt a single focus and omitted key considerations relating to healthcare leadership, systems-level culture, and practices. Mindfulness practice is a quintessential component of positive psychology and is inherently linked to effective leadership. A mindful and compassionate clinician leader will play a crucial role in addressing gaps in current practice, prioritising staff mental health, and providing a supportive platform for innovation.

Keywords: mindfulness practice, clinician burnout, healthcare leadership, COVID-19

Procedia PDF Downloads 150
539 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Experience from past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, which have to remain permanently functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on experimental and numerical analysis of a database of 27 real Reinforced Concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in two horizontal directions, considering four different damping ratios of NSCs (i.e., 2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are studied statistically, and a methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selection of floor level and NSC damping ratio. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems, as well as higher and torsional modes of the primary structure; these are important improvements of this approach compared to conventional methods and code recommendations. The application of the proposed approach is illustrated here through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for the seismic assessment and design of NSCs, especially in existing post-disaster structures.
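As an illustration of the basic ingredient behind the roughly 132,000 FRS curves described above, the sketch below computes a floor response spectrum for a single floor acceleration history by sweeping linear SDOF oscillators over period for each NSC damping ratio. It is an assumed, simplified implementation; the floor motion here is a synthetic placeholder rather than one of the study's spectrum-compatible records.

```python
# Minimal sketch (assumed, not the authors' implementation): a floor response spectrum
# for one floor acceleration history, obtained by sweeping linear SDOF oscillators over
# period for each NSC damping ratio. The floor motion below is a synthetic placeholder.
import numpy as np
from scipy.signal import lsim

def floor_response_spectrum(acc, dt, periods, zeta):
    """Peak absolute acceleration of a linear SDOF (period T, damping ratio zeta)
    excited at its base by the floor acceleration history `acc`."""
    t = np.arange(len(acc)) * dt
    sa = []
    for T in periods:
        wn = 2 * np.pi / T
        # state x = [relative displacement, relative velocity]
        A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
        B = np.array([[0.0], [-1.0]])
        C = np.array([[-wn**2, -2 * zeta * wn]])   # output = absolute acceleration
        D = np.array([[0.0]])
        _, y, _ = lsim((A, B, C, D), acc, t)
        sa.append(np.max(np.abs(y)))
    return np.array(sa)

# toy usage: 10 s of synthetic floor motion at 100 Hz, NSC damping ratios from the study
dt = 0.01
acc = np.random.default_rng(0).normal(0, 0.5, 1000)     # m/s^2, placeholder record
periods = np.linspace(0.05, 3.0, 60)
for zeta in (0.02, 0.05, 0.10, 0.20):
    sa = floor_response_spectrum(acc, dt, periods, zeta)
    print(f"zeta = {zeta:.2f}: peak spectral acceleration = {sa.max():.2f} m/s^2")
```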

Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design

Procedia PDF Downloads 209