Search results for: biochemical approaches
535 Employee Wellbeing: The Key to Organizational Success
Authors: Crystal Hoole
Abstract:
Employee well-being has become an area of concern for top executives and organizations worldwide. In developing countries such as South Africa, and especially in the educational sector, employees have to deal with anxiety, stress, fear, student protests, political and economic turmoil and excessive work demands on a daily basis. Research has shown that workplaces with higher resilience and better well-being strategies also report higher productivity, increased innovation, better employee retention and better employee engagement. Many organisations offer standard employee assistance programs and once-off short interventions. However, most of these well-being initiatives are perceived as ineffective. Some of the criticism centers around a lack of holistic well-being approaches, no proof of the success of well-being initiatives, initiatives not being part of the organization’s strategy and a lack of genuine leadership support. This study attempts to illustrate how a holistic well-being intervention, run over a period of 100 days, is far more effective in impacting organizational outcomes. A quasi-experimental, pre-test and post-test design with a randomization strategy to limit potential bias will be used. Measurements of organizational outcomes are taken at three time points throughout the study: before, at the midpoint and after the intervention. The constructs that will be measured are employee engagement, psychological well-being, organizational culture and trust, and perceived stress. The well-being initiative follows a salutogenic approach and is aimed at building resilience by focusing on six focal areas, namely sleep, mindful eating, exercise, love, gratitude and appreciation, breath work and mindfulness, and finally, purpose. Repeated-measures ANCOVA will be used to determine whether any change occurred over the period of 100 days. The study will take place in a Higher Education institution in South Africa. The sample will consist of academic and administrative staff. Participants will be assigned to a test and a control group. All participants will complete a survey measuring employee engagement, psychological well-being, organizational culture and trust, and perceived stress. Only the test group will undergo the well-being intervention. The study envisages contributing on several levels: firstly, it hopes to find a positive increase in the various well-being indicators of the participants, and secondly, to illustrate that a longer, more holistic approach is successful in improving organisational success (as measured by the various organizational outcomes).
Keywords: wellbeing, resilience, organizational success, intervention
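The abstract names repeated-measures ANCOVA but does not give the analysis itself. A minimal sketch of how such an analysis could be run in Python with statsmodels is shown below: a mixed model with a random intercept per participant and the pre-test score entered as a covariate is one common way to approximate a repeated-measures ANCOVA. The column names, data values, and model form are illustrative assumptions, not the study's actual dataset or code; each of the four constructs (engagement, psychological well-being, culture and trust, perceived stress) would be analysed analogously.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per post-baseline time point.
# 'baseline' holds the pre-test score and is entered as a covariate (the ANCOVA element).
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "group":       ["test", "test", "control", "control", "test", "test", "control", "control"],
    "time":        ["mid", "post"] * 4,
    "baseline":    [3.1, 3.1, 3.4, 3.4, 2.8, 2.8, 3.0, 3.0],
    "engagement":  [3.6, 3.9, 3.3, 3.2, 3.5, 3.8, 3.1, 3.0],
})

# A mixed model with a random intercept per participant approximates a repeated-measures
# ANCOVA: group x time effects on engagement, adjusted for the baseline score.
model = smf.mixedlm("engagement ~ group * time + baseline", df, groups=df["participant"])
print(model.fit().summary())
```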
Procedia PDF Downloads 101
534 The Invisible Planner: Unearthing the Informal Dynamics Shaping Mixed-Use and Compact Development in Ghanaian Cities
Authors: Muwaffaq Usman Adam, Isaac Quaye, Jim Anbazu, Yetimoni Kpeebi, Michael Osei-Assibey
Abstract:
Urban informality, characterized by spontaneous and self-organized practices, plays a significant but often overlooked role in shaping the development of cities, particularly in the context of mixed-use and compact urban environments. This paper aims to explore the invisible planning processes inherent in informal practices and their influence on the urban form of Ghanaian cities. By examining the dynamic interplay between informality and formal planning, the study will discuss the ways in which informal actors shape and plan for mixed-use and compact development. Drawing on the synthesis of relevant secondary data, the research will begin by defining urban informality and identifying the factors that contribute to its prevalence in Ghanaian cities. It will delve into the concept of mixed-use and compact development, highlighting its benefits and importance in urban areas. Drawing on case studies, the paper will uncover the hidden planning processes that occur within informal settlements, showcasing their impact on the physical layout, land use, and spatial arrangements of Ghanaian cities. The study will also uncover the challenges and opportunities associated with informal planning. It examines the constraints faced by informal planners (actors) while also exploring the potential benefits and opportunities that emerge when informality is integrated into formal planning frameworks. By understanding the invisible planner, the research will offer valuable insights into how informal practices can contribute to sustainable and inclusive urban development. Based on the findings, the paper will present policy implications and recommendations. It highlights the need to bridge policy gaps and calls for the recognition of informal planning practices within formal systems. Strategies are proposed to integrate informality into planning frameworks, fostering collaboration between formal and informal actors to achieve compact and mixed-use development in Ghanaian cities. This research underscores the importance of recognizing and leveraging the invisible planner in Ghanaian cities. By embracing informal planning practices, cities can achieve more sustainable, inclusive, and vibrant urban environments that meet the diverse needs of their residents. This research will also contribute to a deeper understanding of the complex dynamics between informality and planning, advocating for inclusive and collaborative approaches that harness the strengths of both formal and informal actors. The findings will likewise contribute to advancing our understanding of informality's role as an invisible yet influential planner, shedding light on its spatial planning implications for Ghanaian cities.
Keywords: informality, mixed-uses, compact development, land use, ghana
Procedia PDF Downloads 126
533 Moving Target Defense against Various Attack Models in Time Sensitive Networks
Authors: Johannes Günther
Abstract:
Time Sensitive Networking (TSN), standardized in the IEEE 802.1 family of standards, has received increasing attention in the context of mission-critical systems. Such mission-critical systems, e.g., in the automotive, aviation, industrial, and smart factory domains, are responsible for coordinating complex functionalities in real time. In many of these contexts, a reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid in the development of new technologies which rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time-focused capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline or even execute a denial of service (DoS) attack if the delays lead to packets being dropped. However, thus far, security concerns have not played a major role in the design of such standards. Thus, while TSN does provide valuable additional characteristics to existing common Ethernet protocols, it leads to new attack vectors on networks and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy. The core idea relies on the reduction of the attackers' knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage. DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker may learn about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics on links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against different attacker models of varying capabilities and budgets, as well as collaborative attacks of multiple attackers within a network, all within the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNeT++ testbench, with networks of different sizes and topologies, ranging from a couple of dozen hosts and switches to significantly larger set-ups.
Keywords: network security, time sensitive networking, moving target defense, cyber security
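The abstract tests MTD-like defenses in OMNeT++ but does not describe them in detail. The toy sketch below (not the authors' model) illustrates the core MTD idea in a TSN-like setting: a reconnaissance-based attacker who has learned the fixed transmission slot of a critical frame causes deadline misses, while randomizing the slot offset each cycle invalidates that knowledge. All slot counts, cycle counts, and the attacker model are hypothetical.

```python
import random

# Toy illustration (not the paper's OMNeT++ model): a TSN-like schedule sends a
# critical frame in one slot of a 10-slot cycle. An attacker who learned that slot
# during reconnaissance injects a delay there. A simple MTD strategy re-randomizes
# the slot offset every cycle, invalidating the attacker's knowledge.
CYCLES, SLOTS, LEARNED_SLOT = 10_000, 10, 3

def deadline_miss_rate(mtd_enabled: bool) -> float:
    missed = 0
    for _ in range(CYCLES):
        slot = random.randrange(SLOTS) if mtd_enabled else LEARNED_SLOT
        if slot == LEARNED_SLOT:   # attacker only disturbs the slot it learned
            missed += 1            # frame delayed beyond its deadline
    return missed / CYCLES

print(f"deadline misses without MTD: {deadline_miss_rate(False):.2%}")
print(f"deadline misses with MTD:    {deadline_miss_rate(True):.2%}")
```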
Procedia PDF Downloads 74
532 Effect of Hypoxia on AOX2 Expression in Chlamydomonas reinhardtii
Authors: Maria Ostroukhova, Zhanneta Zalutskaya, Elena Ermilova
Abstract:
The alternative oxidase (AOX) mediates cyanide-resistant respiration, which bypasses the proton-pumping complexes III and IV of the cytochrome pathway to directly transfer electrons from reduced ubiquinone to molecular oxygen. In Chlamydomonas reinhardtii, AOX is a monomeric protein that is encoded by two genes of discrete subfamilies, AOX1 and AOX2. Although AOX has been proposed to play essential roles in the stress tolerance of organisms, the role of the AOX2 subfamily is largely unknown. In C. reinhardtii, AOX2 was initially identified as one of the constitutively low-expressed genes. Like other photosynthetic organisms, C. reinhardtii cells frequently experience periods of hypoxia. To examine AOX2 transcriptional regulation and the role of AOX2 in hypoxia adaptation, real-time PCR analysis and the artificial microRNA method were employed. Two experimental approaches were used to induce anoxic conditions: dark-anaerobic and light-anaerobic conditions. C. reinhardtii cells exposed to oxygen deprivation showed increased AOX2 mRNA levels. By contrast, AOX1 was not an anoxia-responsive gene. In C. reinhardtii, a subset of genes is regulated by the transcription factor CRR1 under anaerobic conditions. Notably, the AOX2 promoter region contains a potential motif for CRR1 binding. Therefore, the role of CRR1 in the control of AOX2 transcription was tested. The CRR1-underexpressing strains, which were generated and characterized in this work, exhibited low levels of AOX2 transcripts under anoxic conditions. However, the transformants still slightly induced AOX2 gene expression in the darkness. This confirmed our suggestion that darkness is a regulatory stimulus for AOX genes in C. reinhardtii. Thus, other factors must contribute to AOX2 promoter activity under dark-anoxic conditions. Moreover, knock-down of CRR1 caused a complete reduction of AOX2 expression under light-anoxic conditions. These results indicate that (1) CRR1 is required for AOX2 expression during hypoxia, and (2) the AOX2 gene is regulated by CRR1 together with yet-unknown regulatory factor(s). In addition, AOX2-underexpressing strains were generated. The analysis of amiRNA-AOX2 strains suggested a role for this alternative oxidase in the hypoxia adaptation of the alga. In conclusion, the results reported here show that the C. reinhardtii AOX2 gene is stress-inducible. The CRR1 transcription factor is involved in the regulation of AOX2 gene expression in the absence of oxygen. Moreover, AOX2, but not AOX1, functions under oxygen deprivation. This work was supported by the Russian Science Foundation (research grant № 16-14-10004).
Keywords: alternative oxidase 2, artificial microRNA approach, chlamydomonas reinhardtii, hypoxia
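The abstract reports real-time PCR analysis of AOX2 transcript levels but does not state the quantification scheme. A minimal sketch of the widely used 2^(-ΔΔCt) (Livak) method is given below; the Ct values and the choice of reference gene are purely hypothetical, for illustration only.

```python
# Minimal sketch of relative transcript quantification from real-time PCR data using
# the common 2^(-ddCt) (Livak) method. Ct values and the reference gene are hypothetical.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # anoxia vs. aerobic control
    return 2 ** (-dd_ct)                                # fold change of AOX2 mRNA

fold_change = relative_expression(ct_target_treated=22.1, ct_ref_treated=18.0,
                                  ct_target_control=25.4, ct_ref_control=18.2)
print(f"AOX2 fold change under oxygen deprivation: {fold_change:.1f}x")
```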
Procedia PDF Downloads 242
531 Communicative Competence Is About Speaking a Lot: Teacher’s Voice on the Art of Developing Communicative Competence
Authors: Bernice Badal
Abstract:
The South African English curriculum emphasizes the adoption of the Communicative Approach (CA) using Communicative Language Teaching (CLT) methodologies to develop English as a second language (ESL) learners’ communicative competence in contexts such as township schools in South Africa. However, studies indicate that the adoption of the approach largely remains rhetoric. Poor student performance, which continues from the secondary to the tertiary phase, is widely attributed to learners' limited English language proficiency in South Africa. Consequently, this qualitative study, using a mix of classroom observations and interviews, sought to investigate teacher knowledge of communicative competence and the methods and strategies ESL teachers used to develop their learners’ communicative competence. The success of learners’ ability to develop communicative competence in contexts such as township schools in South Africa is inseparable from materials, tasks, teacher knowledge and how teachers implement the approach in the classroom. Accordingly, teacher knowledge of the theory and practical implications of the CLT approach is imperative for the negotiation of meaning and appropriate use of language in context in resource-impoverished areas like the township. Using a mix of interviews and observations as data sources, this qualitative study examined teachers’ definitions and knowledge of communicative competence with a focus on how it influenced their classroom practices. The findings revealed that teachers were not familiar with the notion of communicative competence, the communication process, and the underpinnings of CLT. Teachers’ narratives indicated an awareness that there should be interaction and communication in the classroom, but a lack of theoretical understanding of the types of communication necessary scuttled their initiatives. Thus, conceptual deficiency influences teachers’ practices as they engage in classroom activities in a superficial manner or focus on stipulated learner activities prescribed by the CAPS document. This study, therefore, concluded that partial or limited conceptual understandings combined with ‘teacher-proof’ stipulations for classroom practice do not inspire teacher efficacy and mastery of prescribed approaches; thus, more effort should be made by the Department of Basic Education to strengthen the existing professional development workshops to support teachers in improving their understanding and application of CLT for the development of communicative competence in their learners. The findings of the study contribute to the field of teacher knowledge acquisition, teacher beliefs and practices, and professional development in the context of second language teaching and learning, with a recommendation that frameworks for the development of communicative competence with wider applicability in resource-poor environments be developed to support teacher understanding and application in classrooms.
Keywords: communicative competence, CLT, conceptual understanding of reforms, professional development
Procedia PDF Downloads 60
530 Numerical Simulation of Von Karman Swirling Bioconvection Nanofluid Flow from a Deformable Rotating Disk
Authors: Ali Kadir, S. R. Mishra, M. Shamshuddin, O. Anwar Beg
Abstract:
Motivation- Rotating disk bio-reactors are fundamental to numerous medical/biochemical engineering processes including oxygen transfer, chromatography, purification and swirl-assisted pumping. The modern upsurge in biologically-enhanced engineering devices has embraced new phenomena including bioconvection of micro-organisms (phototactic, oxytactic, gyrotactic, etc.). The proven thermal performance superiority of nanofluids, i.e., base fluids doped with engineered nanoparticles, has also stimulated immense implementation in biomedical designs. Motivated by these emerging applications, we present a numerical thermofluid dynamic simulation of the transport phenomena in bioconvection nanofluid rotating disk bioreactor flow. Methodology- We study analytically and computationally the time-dependent three-dimensional viscous gyrotactic bioconvection in swirling nanofluid flow from a rotating disk configuration. The disk is also deformable, i.e., able to extend (stretch) in the radial direction. Stefan blowing is included. The Buongiorno dilute nanofluid model is adopted, wherein Brownian motion and thermophoresis are the dominant nanoscale effects. The primitive conservation equations for mass, radial, tangential and axial momentum, heat (energy), nanoparticle concentration and micro-organism density function are formulated in a cylindrical polar coordinate system with appropriate wall and free stream boundary conditions. A mass convective condition is also incorporated at the disk surface. Forced convection is considered, i.e., buoyancy forces are neglected. This highly nonlinear, strongly coupled system of unsteady partial differential equations is normalized with the classical Von Karman and other transformations to render the boundary value problem (BVP) into an ordinary differential system, which is solved with the efficient Adomian decomposition method (ADM). Validation with earlier Runge-Kutta shooting computations in the literature is also conducted. Extensive computations are presented (with the aid of MATLAB symbolic software) for radial and circumferential velocity components, temperature, nanoparticle concentration, micro-organism density number and gradients of these functions at the disk surface (radial local skin friction, local circumferential skin friction, local Nusselt number, local Sherwood number, motile micro-organism mass transfer rate). Main Findings- Increasing the radial stretching parameter decreases radial velocity and radial skin friction, reduces azimuthal velocity and skin friction, decreases the local Nusselt number and motile micro-organism mass wall flux, whereas it increases the nano-particle local Sherwood number. Disk deceleration accelerates the radial flow, damps the azimuthal flow, decreases temperatures and thermal boundary layer thickness, depletes the nano-particle concentration magnitudes (and associated nano-particle species boundary layer thickness) and furthermore decreases the micro-organism density number and gyrotactic micro-organism species boundary layer thickness. Increasing Stefan blowing accelerates the radial flow and the azimuthal (circumferential) flow, elevates temperatures of the nanofluid, and boosts nano-particle concentration (volume fraction) and gyrotactic micro-organism density number magnitudes, whereas suction generates the reverse effects. Increasing the suction effect reduces radial skin friction and azimuthal skin friction, the local Nusselt number, and the motile micro-organism wall mass flux, whereas it enhances the nano-particle species local Sherwood number.
Conclusions- Important transport characteristics of relevance to real bioreactor nanotechnological systems, not discussed in previous works, are identified. ADM is shown to achieve very rapid convergence and highly accurate solutions and shows excellent promise in simulating swirling multi-physical nano-bioconvection fluid dynamics problems. Furthermore, it provides an excellent complement to more general commercial computational fluid dynamics simulations.
Keywords: bio-nanofluids, rotating disk bioreactors, Von Karman swirling flow, numerical solutions
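The abstract states that the unsteady boundary-layer equations are normalized with the classical Von Karman and other transformations. For reference, the classical Von Karman similarity variables for a disk rotating at angular velocity Ω in a fluid of kinematic viscosity ν take the standard form below; the authors' full transformation (which additionally handles disk stretching, unsteadiness, Stefan blowing, temperature, nanoparticle concentration and micro-organism density) is not reproduced here.

```latex
\begin{align}
\eta &= z\sqrt{\frac{\Omega}{\nu}}, &
u &= r\,\Omega\,F(\eta), &
v &= r\,\Omega\,G(\eta), &
w &= \sqrt{\nu\,\Omega}\,H(\eta).
\end{align}
```

Continuity then requires 2F + H' = 0, and the momentum, energy, nanoparticle and micro-organism equations reduce to a coupled system of ordinary differential equations in η, which is the kind of system the Adomian decomposition method solves.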
Procedia PDF Downloads 157
529 Principles and Guidance for the Last Days of Life: Te Ara Whakapiri
Authors: Tania Chalton
Abstract:
In June 2013, an independent review of the Liverpool Care Pathway (LCP) identified a number of problems with the implementation of the LCP in the UK and recommended that it be replaced by individual care plans for each patient. As a result of the UK findings, in November 2013 the Ministry of Health (MOH) commissioned the Palliative Care Council to initiate a programme of work to investigate an appropriate approach for the care of people in their last days of life in New Zealand (NZ). The Last Days of Life Working Group commenced a process to develop national consensus on the care of people in their last days of life in April 2014. In order to develop its advice for the future provision of care to people in their last days of life, the Working Group (WG) established a comprehensive work programme and as a result has developed a series of working papers. Specific areas of focus included: an analysis of the UK Independent Review findings and an assessment of these findings in the NZ context; a stocktake of services providing care to people in their last days of life, including aged residential care (ARC), hospices, hospitals, and primary care; international and NZ literature reviews of evidence and best practice; and a survey of families to understand the consumer perspective on the care of people in their last days of life. Key aspects of care that required further consideration for NZ were: Terminology: clarify terminology used in the last days of life and in relation to death and dying. Evidence base: including a specific review of evidence regarding spiritual and culturally appropriate care, as well as dementia care. Diagnosis of dying: need for both guidance around the diagnosis of dying and communication with family. Workforce issues: access to an appropriate workforce after hours. Nutrition and hydration: guidance around appropriate approaches to nutrition and hydration. Symptom and pain management: guidance around symptom management. Documentation: documentation of the person’s care which is robust enough for data collection and auditing requirements, not a ‘tick box’ approach to care. Education and training: improved consistency and access to appropriate education and training. Leadership: a dedicated team or person to support and coordinate the introduction and implementation of any last days of life model of care. Quality indicators and data collection: a model of care to enable auditing and regular reviews to ensure ongoing quality improvement. Cultural and spiritual: address and incorporate any cultural and spiritual aspects. A final document was developed incorporating all the evidence, which provides guidance to the health sector on best practice for people at the end of life: “Principles and guidance for the last days of life: Te Ara Whakapiri”.
Keywords: end of life, guidelines, New Zealand, palliative care
Procedia PDF Downloads 435
528 Promoting Gender Diversity in the UN Peacekeeping Operations: An Analysis of Factors Influencing Female Military Troops Deployment
Authors: Rahab Kisio
Abstract:
The persistent underrepresentation of female military personnel in United Nations (UN) peacekeeping missions remains a critical concern for addressing the multifaceted challenges in conflict-affected regions. This research explores the factors influencing countries’ decisions to deploy female military troops to UN peacekeeping operations, examining data ranging from 2010 to 2020. The study highlights the urgent need for policymakers and international organizations to recognize gender equality as a key instrument in dealing with sexual exploitation and abuse within these missions. The study suggests three reasons for the low deployment of female military troops. Firstly, countries actively breaking down barriers for women in the workforce are more likely to send female military troops. Secondly, nations supporting women in politics are more likely to deploy female military troops, showing their value for gender equality. Lastly, countries with a history of conflict may send more female military troops to align with the UN's call and potentially gain international support in future conflicts. Theoretical approaches are presented to explore these motivations further, and the study uses negative binomial regression with country-year as the unit of analysis to test various explanations for a country's contribution of female military troops to UN peacekeeping missions. Findings show that there is a connection between troop-contributing countries’ gender equality and the participation of female military troops in peacekeeping operations. Nations that prioritize gender equality and empower women have a higher likelihood of deploying more female military personnel. The study emphasizes the significance of women in political leadership, indicating that countries actively addressing barriers to women's political representation are more willing to contribute higher numbers of female military troops to peacekeeping missions. While the research supports hypotheses related to gender equality and political representation, it finds no significant evidence that a country's history of conflict directly influences the deployment of female military troops in other conflict-ridden nations. This research contributes valuable insights into gender equality within peacekeeping forces, shedding light on factors influencing the deployment of female military personnel. The implications underscore the importance of actively addressing discrimination, promoting women's political participation, and understanding the influence of a nation's conflict history. The interdisciplinary nature of this work calls for collaborative efforts from policymakers, international organizations, and researchers to formulate strategies for effectively increasing female military troop participation in UN peacekeeping.
Keywords: UN peacekeeping, gender diversity, female military troops, discrimination
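The abstract specifies a negative binomial regression with country-year as the unit of analysis but does not list the model. A minimal sketch of how such a model could be fitted with statsmodels is shown below; the covariate names and values merely stand in for the study's gender-equality, political-representation and conflict-history measures and are not the actual data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical country-year panel: counts of female military troops deployed, with
# illustrative covariates standing in for the study's actual measures.
df = pd.DataFrame({
    "female_troops":    [0, 2, 5, 1, 8, 0, 3, 12],
    "gender_equality":  [0.4, 0.5, 0.7, 0.3, 0.8, 0.2, 0.6, 0.9],
    "women_in_parl":    [10, 15, 30, 8, 35, 5, 22, 40],   # % of seats held by women
    "conflict_history": [1, 0, 0, 1, 0, 1, 0, 0],          # past-conflict indicator
})

# Negative binomial GLM for an over-dispersed count outcome (one row = one country-year).
model = smf.glm("female_troops ~ gender_equality + women_in_parl + conflict_history",
                data=df, family=sm.families.NegativeBinomial())
print(model.fit().summary())
```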
Procedia PDF Downloads 51
527 Gender-Transformative Education: A Pathway to Nourishing and Evolving Gender Equality in the Higher Education of Iran
Authors: Sepideh Mirzaee
Abstract:
Gender-transformative education (G-TE) is a challenging concept in the field of education and a matter of hot debate in the contemporary world. Paulo Freire, as the prominent advocate of transformative education, considers it an alternative to the conventional banking model of education. Building on this, a more inclusive concept has been introduced, namely G-TE, an unbiased education fostering an environment of gender justice. As its main tenet, G-TE eliminates obstacles to education and promotes social change. A plethora of contemporary research indicates that G-TE could completely revolutionize education systems by displacing inequalities and changing gender stereotypes. Despite significant progress in female education and its effects on gender equality in Iran, challenges persist. There are still deficiencies regarding gender disparities in society and, specifically, in education. As an example, the number of women with university degrees is on the rise; thus, there will be an increasing demand for employment by them in society. Instead, many job opportunities remain occupied by men, and it is seen as intolerable for society to assign such occupations to women. In fact, Iran is regarded as a patriarchal society where educational contexts can play a critical role in assigning gender ideology to learners. Thus, such gender ideologies in education can become the prevailing ideologies in the entire society. Therefore, improving education in this regard can lead to a significant change in society, subsequently influencing the status of women not only within their own country but also on a global scale. Notably, higher education plays a vital role in this empowerment and social change. In particular, higher education can have a crucial part in imparting gender-neutral ideologies to its learners and bringing about substantial change. It has the potential to alleviate the detrimental effects of gender inequalities. Therefore, this study aims to conceptualize the pivotal role of G-TE and its potential power in developing gender equality within the higher education system of Iran, presented within a theoretical framework. The study emphasizes the necessity of establishing a theoretical grounding for citizenship and transformative education while distinguishing gender-related issues, including gender equality, equity and parity. This theoretical foundation will shed light on the decisions made by policy-makers, syllabus designers, material developers, and, specifically, professors and students. By doing so, they will be able to promote and implement gender equality, recognizing the determinants, obstacles, and consequences of sustaining gender-transformative approaches in their classes within the Iranian higher education system. The expected outcomes include the eradication of gender inequality, the transformation of gender stereotypes and the provision of equal opportunities for both males and females in education.
Keywords: citizenship education, gender inequality, higher education, patriarchal society, transformative education
Procedia PDF Downloads 65
526 Mitigating the Negative Health Effects from Stress - A Social Network Analysis
Authors: Jennifer A. Kowalkowski
Abstract:
Production agriculture (farming) is a physically, emotionally, and cognitively stressful occupation, where workers have little control over the stressors that impact both their work and their lives. In an occupation already rife with hazards, these occupational-related stressors have been shown to increase farm workers’ risks for illness, injury, disability, and death associated with their work. Despite efforts to mitigate the negative health effects from occupational-related stress (ORS) and to promote health and well-being (HWB) among farmers in the US, marked improvements have not been attained. Social support accessed through social networks has been shown to buffer against the negative health effects of stress, yet no studies have directly examined these relationships among farmers. The purpose of this study was to use social network analysis to explore the social networks of farm owner-operators and the social supports available to them for mitigating the negative health effects of ORS. A convenience sample of 71 farm owner-operators from a Midwestern county in the US completed and returned a mailed survey (55.5% response rate) that solicited information about their social networks related to ORS. Farmers reported an average of 2.4 individuals in their personal networks and higher levels of comfort discussing ORS with female network members. Farmers also identified few connections (3.4% density) and indicated low comfort with members of affiliation networks specific to ORS. Findings from this study highlighted that farmers accessed different social networks and resources for their personal HWB than for issues related to occupational (farm-related) health and safety. In addition, farmers’ social networks for personal HWB were smaller, with different relational characteristics than reported in studies of farmers’ social networks related to occupational health and safety. Collectively, these findings suggest that farmers conceptualize personal HWB differently than farm health and safety. Therefore, the same research approaches and targets that guide occupational health and safety research may not be appropriate for the personal HWB of farmers. Interventions and programming targeting ORS and HWB have largely been offered through the same platforms or mechanisms as occupational health and safety programs. This may be attributed to the significant overlap between the farm as a family business and place of residence, or to the fact that ORS stems from farm-related issues. However, these assumptions, translated to the health research of farmers and farm families from the occupational health and safety literature, have not been directly studied or challenged. This may explain why past interventions have not been effective at improving health outcomes for farmers and farm families. A close examination of findings from this study raises important questions for researchers who study agricultural health. Findings from this study have significant implications for future research agendas focused on addressing ORS, HWB, and health disparities for farmers and farm families.
Keywords: agricultural health, occupational-related stress, social networks, well-being
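The abstract reports network measures such as personal network size and a 3.4% affiliation-network density. A minimal sketch of how such measures can be computed with networkx is given below; the tiny ego network is hypothetical and is not the study's data.

```python
import networkx as nx

# Hypothetical ego network: a farmer ("ego") and the people they report being able
# to talk to about occupational-related stress (illustration only).
G = nx.Graph()
G.add_edges_from([
    ("ego", "spouse"), ("ego", "neighbor"), ("ego", "veterinarian"),
    ("spouse", "neighbor"),
])

personal_network_size = G.degree("ego")   # number of alters named by the ego
density = nx.density(G)                   # realized ties / possible ties
print(f"personal network size: {personal_network_size}")
print(f"network density: {density:.1%}")
```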
Procedia PDF Downloads 109
525 Explore and Reduce the Performance Gap between Building Modelling Simulations and the Real World: Case Study
Authors: B. Salehi, D. Andrews, I. Chaer, A. Gillich, A. Chalk, D. Bush
Abstract:
With the rapid increase of energy consumption in buildings in recent years, especially with the rise in population and growing economies, the importance of energy savings in buildings becomes more critical. One of the key factors in ensuring energy consumption is controlled and kept at a minimum is to utilise building energy modelling at the very early stages of the design. So, building modelling and simulation is a growing discipline. During the design phase of construction, modelling software can be used to estimate a building’s projected energy consumption, as well as building performance. The growth in the use of building modelling software packages opens the door for improvements in the design and also in the modelling itself by introducing novel methods, such as building information modelling-based software packages, which promote conventional building energy modelling into the digital building design process. To understand the most effective implementation tools, research projects undertaken should include elements of real-world experiments and not just rely on theoretical and simulated approaches. Upon review of the related studies undertaken, it is evident that they are mostly based on modelling and simulation, which can be due to various reasons such as the more expensive and time-consuming nature of real-time data-based studies. Taking into account the recent rise of building energy modelling software packages and the increasing number of studies utilising these methods in their projects and research, the accuracy and reliability of these modelling software packages have become even more critical. This concern centres on the Energy Performance Gap: the discrepancy between the predicted energy savings and the realised actual savings, especially after buildings implement energy-efficient technologies. There are many different software packages available, which are either free or have commercial versions. In this study, IES VE (Integrated Environmental Solutions Virtual Environment) is used, as it is a common building energy modelling and simulation software package in the UK. This paper describes a study that compares real-time results with those in a virtual model to illustrate this gap. The subject of the study is a north-west-facing (345°), naturally ventilated conservatory within a domestic building in London, which is monitored during summer to capture real-time data. These results are then compared to the virtual results of IES VE. In this project, the effect of the wrong position of blinds on overheating is studied, as well as providing new evidence of the performance gap. Furthermore, the challenges of defining solar shading products as inputs in IES VE will be considered.
Keywords: building energy modelling and simulation, integrated environmental solutions virtual environment, IES VE, performance gap, real time data, solar shading products
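The abstract compares monitored data against IES VE output but does not state how the gap is quantified. Two metrics commonly used for calibration and gap studies (e.g., in ASHRAE Guideline 14) are the normalized mean bias error (NMBE) and the coefficient of variation of the RMSE (CV(RMSE)); the sketch below, with hypothetical hourly conservatory temperatures, shows one possible way to express the gap.

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized mean bias error (%): overall over- or under-prediction."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return 100 * (simulated - measured).sum() / ((len(measured) - 1) * measured.mean())

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE (%): scatter around the measurements."""
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(((simulated - measured) ** 2).mean())
    return 100 * rmse / measured.mean()

# Hypothetical hourly conservatory air temperatures (degC): monitored vs. IES VE output.
monitored = [24.1, 26.3, 29.8, 31.2, 30.5, 27.9]
simulated = [23.5, 25.1, 27.6, 29.0, 28.8, 26.5]
print(f"NMBE:     {nmbe(monitored, simulated):.1f}%")
print(f"CV(RMSE): {cv_rmse(monitored, simulated):.1f}%")
```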
Procedia PDF Downloads 139
524 The Effect of Data Integration to the Smart City
Authors: Richard Byrne, Emma Mulliner
Abstract:
Smart cities are a vision for the future that is increasingly becoming a reality. While a key concept of the smart city is the ability to capture, communicate, and process data that has long been produced through day-to-day activities of the city, many of the assessment models in place neglect this fact to focus on ‘smartness’ concepts. Although it is true that technology often provides the opportunity to capture and communicate data in more effective ways, there are also human processes involved that are just as important. The growing importance of the use and ownership of data in society can be seen by all, with companies such as Facebook and Google increasingly coming under the microscope; however, why is the same scrutiny not applied to cities? The research area is therefore of great importance to the future of our cities here and now, while the findings will be of just as great importance to our children in the future. This research aims to understand the influence data is having on organisations operating throughout the smart cities sector and employs a mixed-method research approach in order to best answer the following question: would a data-based evaluation model for smart cities be more appropriate than a smart-based model in assessing the development of the smart city? A comprehensive literature review concluded that there was a requirement for a data-driven assessment model for smart cities. This was followed by a documentary analysis to understand the root source of data integration to the smart city. A content analysis of city data platforms enquired into the alternative approaches employed by cities throughout the UK and drew on best practice from New York to compare and contrast. Grounded in theory, the research findings to this point formulated a qualitative analysis framework comprising: the changing environment influenced by data, the value of data in the smart city, the data ecosystem of the smart city, and organisational responses to the data-orientated environment. The framework was applied to analyse primary data collected in the form of interviews with both public and private organisations operating throughout the smart cities sector. The work to date represents the first stage of data collection and will be built upon by a quantitative research investigation into the feasibility of data network effects in the smart city. An analysis into the benefits of data interoperability supporting services to the smart city in the areas of health and transport will conclude the research, to achieve the aim of inductively forming a framework that can be applied to future smart city policy. To conclude, the research recognises the influence of technological perspectives on the development of smart cities to date and highlights the challenge of introducing theory applied with a planning dimension. The primary researcher has utilised their experience working in the public sector throughout the investigation to reflect upon what is perceived as a gap in practice between where we are today and where we need to be tomorrow.
Keywords: data, planning, policy development, smart cities
Procedia PDF Downloads 312
523 Strategies for Incorporating Intercultural Intelligence into Higher Education
Authors: Hyoshin Kim
Abstract:
Most post-secondary educational institutions have offered a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these ‘effective methods’ are assumed to work regardless of what we teach and whom we teach. This paper is focused on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students. This is because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from certain sets of generic teaching skills, which are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There have been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as a strength, but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches. For example, cultural difference is negatively evaluated, either implicitly or explicitly. Therefore, the pressure is on culturally diverse students. The following questions reflect the underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems? Even though these questions may be well-intended, there seems to be something fundamentally wrong, as the assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into course design by utilizing a variety of tools such as pre-course activities, peer learning and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved by activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students’ diverse interests, thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.
Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education
Procedia PDF Downloads 212
522 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties
Authors: P. T. Collett, M. Kershaw
Abstract:
Introduction: The discussion regarding which method of anaesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anaesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anaesthetic regimen. A local study was performed to see if the current anaesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anaesthetic technique, day-one opiate use, pain score, and length of stay. The data was collected from anaesthetic charts and the pain team follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of those, 83% (n=75) received a spinal anaesthetic and 17% (n=15) received a general anaesthetic. For patients undergoing knee replacement under general anaesthetic, the average day-one pain score was 2.29, and 1.94 if a spinal anaesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistical significance between these scores. Day-one opiate usage was significantly higher in knee replacement patients who were given a general anaesthetic (45.7 mg IV morphine equivalent) vs. those who were operated on under spinal anaesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anaesthetic techniques. Discussion: There was no significant difference in the day-one pain score between the patients who received a general or spinal anaesthetic for either knee or hip replacements. The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which means it is difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anaesthetic approaches utilized at Nevill Hall Hospital. This is likely due to the lack of adherence to a standardized anaesthetic regimen. In accordance with ERAS recommendations, a standard anaesthetic protocol is a core component. The results of this study and the guidance from the ERAS society will support the implementation of a new health board-wide ERAS protocol.
Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation
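The abstract reports group comparisons (for example, day-one opiate use of 45.7 mg vs. 19.7 mg IV morphine equivalent) but does not state which statistical test was applied. The sketch below shows one plausible non-parametric option, a Mann-Whitney U test, on hypothetical per-patient values; the individual numbers are illustrative and only the group means come from the abstract.

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-patient day-one opiate use (mg IV morphine equivalent) for knee
# replacements; only the group means (45.7 vs 19.7 mg) are reported in the abstract.
general_anaesthetic = [52, 40, 61, 38, 47, 36, 50]
spinal_anaesthetic  = [18, 25, 14, 22, 19, 20, 21]

stat, p_value = mannwhitneyu(general_anaesthetic, spinal_anaesthetic, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```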
Procedia PDF Downloads 128
521 Plastic Waste Sorting by the People of Dakar
Authors: E. Gaury, P. Mandausch, O. Picot, A. R. Thomas, L. Veisblat, L. Ralambozanany, C. Delsart
Abstract:
In Dakar, demographic and spatial growth was accompanied by a 50% increase in household waste between 1988 and 2008. In addition, a change in the nature of household waste was observed between 1990 and 2007. The share of plastic increased by 15% between 2004 and 2007 in Dakar. Plastics represent the seventh most-produced category of household waste per year in Senegal. The share of plastic in household and similar waste is 9% in Senegal. Waste management in the city of Dakar is a complex process involving a multitude of formal and informal actors with different perceptions and objectives. The objective of this study was to understand the motivations that could lead to sorting action, as well as the perception of plastic waste sorting within the Dakar population (households and institutions). The research question of this study was as follows: what factors may play a role in the sorting action? In an attempt to answer this, two approaches were developed: (1) an exploratory qualitative study by semi-structured interviews with two groups of individuals concerned by the sorting of plastic waste: on the one hand, the experts in charge of waste management, and on the other, the households producing plastic waste. This study served as the basis for formulating the hypotheses and thus for the quantitative analysis. (2) A quantitative study using a questionnaire survey method among households producing plastic waste in order to test the previously formulated hypotheses. The objective was to obtain quantitative results representative of the population of Dakar in relation to the behavior and the process inherent in the adoption of the plastic waste sorting action. The exploratory study shows that the perception of state responsibility varies between institutions and households. Public institutions perceive this as a shared responsibility because the problem of plastic waste affects many sectors (health, environmental education, etc.). Their involvement is geared more towards raising awareness and educating young people. As state action is limited, the emergence of private companies in this sector seems logical, as they are setting up collection networks to develop a recycling activity. The state plays a moral support role in these activities and encourages companies to do more. The quantitative analysis of plastic waste sorting by the population of Dakar demonstrated the attitudes and constraints inherent in the adoption of sorting. Cognitive attitude, knowledge, and visible consequences were shown to correlate positively with sorting behavior. Thus, it would seem that the population of Dakar is more sensitive to what they see and what they know when adopting sorting behavior. It has also been shown that the strongest constraints that could slow down sorting behavior were the complexity of the process, the time required, and the lack of infrastructure in which to deposit plastic waste.
Keywords: behavior, Dakar, plastic waste, waste management
Procedia PDF Downloads 97
520 Experimental Study of Impregnated Diamond Bit Wear During Sharpening
Authors: Rui Huang, Thomas Richard, Masood Mostofi
Abstract:
The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions: not only the extent of the diamonds’ wear but also their exposure or protrusion out of the matrix bonding. As much as individual diamonds wear, the bonding matrix also wears, through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding of how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, as well as no literature on the process that governs bit sharpening, a procedure commonly implemented by drillers when the extent of diamond polishing yields an extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance and (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response, and thus efficiency, is mapped out using a tailored experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, under the fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further show that the threshold exposure scales with the rate of penetration up to a point where exposure reaches a maximum, beyond which no more matrix can be eroded under normal drilling conditions. The second phase of this research focuses on the wear process referred to as bit sharpening. Drillers rely on different approaches (increasing the feed rate or decreasing the flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh, sharp diamonds and recovering a higher rate of penetration. Although it is a common procedure, there is no rigorous methodology to sharpen the bit and avoid excessive wear or bit damage. This paper aims to gain some insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion, and how they relate to drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear and that the extent of bit sharpening can be monitored in real time.
Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate
Procedia PDF Downloads 99
519 Four Museums for One (Hi) Story
Authors: Sheyla Moroni
Abstract:
A number of scholars around the world have analyzed the great architectural and urban planning revolution proposed by Skopje 2014, but so far, there are no readings of the parallels between the museums in the Balkan area (including Greece) that share the same name as the museum at the center of that political and cultural revolution. In the former FYROM (now renamed North Macedonia), a museum called "Macedonian Struggle" was born during the reconstruction of the city of Skopje as the new "national" capital. This new museum was built under the "Skopje 2014" plan and cost about 560 million euros (1/3 of the country's GDP). It has been a "flagship" of the government of Nikola Gruevski, leader of the nationalist VMRO-DPMNE party. Until 2016, this museum stayed close to the motivations of the Macedonian nationalist movement (and later party) that was active (including terrorist actions) during the 19th and 20th centuries. The museum served to narrate a new "nation-building" after "state-building" had already taken place. But there are three other museums that tell the story of the "Macedonian struggle" by understanding "Macedonia" as a territory other than present-day North Macedonia. The first one is located in Thessaloniki and primarily commemorates the "Greek battle" against the Ottoman Empire. While the Skopje museum uses a new, dark building and many reconstructed rooms and shows the bloody history of the quest for "freedom" for the Macedonian language and people (different from Greeks, Albanians, and Bulgarians), the Thessaloniki museum is located in an old building and, in its six rooms on the ground floor, graphically illustrates the modern and contemporary history of Greek Macedonia. There are also third and fourth museums: in Kastoria (toward the Albanian border) and in Chromio (near the Greek-North Macedonian border). These two museums (Kastoria and Chromio) are smaller, but they mark two important borders for the (Greek) regions toward Albania and North Macedonia, dividing them not only from the Ottoman past but also from two communities felt to be "foreign" (Albanians and former Yugoslav Macedonians). All the museums reconstruct a different "national edifice" and emphasize the themes of language and religion. The objective of the research is to understand, through four museums bearing the same name, the main "mental boundaries" (religious, linguistic, cultural) of the different states (reconstructed between the late 19th century and 1991). Both classical historiographic methodology (very different between Balkan and "Western" areas) and on-site observation and interactions with the different sites are used in this research. An attempt is made to highlight four different political focuses with respect to nation-building and the Public History (and/or propaganda) approaches applied in the construction of these buildings and memorials, as well as the recurring tendency to "define" oneself by differences from "others" (even if close).
Keywords: nationalisms, museum, nation building, public history
Procedia PDF Downloads 86
518 Rehabilitation Team after Brain Damages as Complex System Integrating Consciousness
Authors: Olga Maksakova
Abstract:
Work with unconscious patients after acute brain damage requires, besides the special knowledge and practical skills of all the participants, a very specific organization. A lot has been said about the team approach in neurorehabilitation, usually for the outpatient mode. Rehabilitologists deal with fixed patient problems or deficits (motion, speech, cognitive or emotional disorders). Team-building there follows the superficial paradigm of management psychology, and a linear mode of teamwork fits such casual relationships. Cases with deeply altered states of consciousness (vegetative states, coma, and confusion) require a non-linear mode of teamwork: recovery of consciousness might not be the goal, due to the uncertainty of the phenomenon. The rehabilitation team, as a semi-open complex system, includes the patient as a part. The patient's response pattern is formed not only by brain deficits but also by the questions-stimuli, the context, and the inquiring person. Teamwork becomes a source of phenomenological knowledge of the patient's processes, as the third-person approach is replaced with second- and then first-person approaches. Herein lies a chance for real-time change. The patient's contacts with his own body and outward things create a basis for the restoration of consciousness. The most important condition is systematic feedback to any minimal movement or vegetative signal of the patient. Up to now, recovery work with the most severe contingent has been carried out in the mode of passive physical interventions, while an effective rehabilitation team should include specially trained psychologists and psychotherapists. It is they who are able to create a network of feedback with the patient, as well as the inter-professional feedback that builds up the team. The characteristics of the ‘Team-Patient’ system (TPS) are energy, entropy, and complexity. Impairment of consciousness, as the absence of linear contact, appears together with a loss of essential functions (low energy), vegetative-visceral fits (excessive energy and low order), motor agitation (excessive energy and excessive order), etc. The techniques of teamwork differ in these cases, with the aim of optimizing the system condition. Directed regulation of the system's complexity is one of the recovery tools. Different signs of awareness appear as a result of system self-organization. Joint meetings are an important part of teamwork. Regular or event-related discussions form the language of inter-professional communication, as well as a shared mental model of the patient. Analysis of the complex communication process in the TPS may be useful for the creation of a general theory of consciousness.
Keywords: rehabilitation team, urgent rehabilitation, severe brain damage, consciousness disorders, complex system theory
Procedia PDF Downloads 147
517 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems. One is that analytic solutions may not be possible. The other is that, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system; its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thereby demonstrating the utility of KWIK learners in an AST framework. The next step is implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials. Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
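As a rough illustration of the adversarial grid-world setup described above, the following sketch trains a tabular Q-learning adversary to intercept an agent, warm-starting its value table with cheap coarse-time-step (low-fidelity) episodes before running fewer fine-time-step (high-fidelity) episodes. The grid size, reward shaping, and the simple value sharing across fidelities are illustrative assumptions; this is not the bidirectional KWIK framework proposed by the authors.

```python
# Illustrative sketch only: a tabular Q-learning adversary in a grid world,
# with fidelity controlled by the time-step. Hypothetical setup, not the
# authors' bidirectional multi-fidelity KWIK implementation.
import random
from collections import defaultdict

GRID = 5  # 5 x 5 grid world

def agent_step(agent, dt):
    """The agent moves deterministically toward the goal at (GRID-1, GRID-1)."""
    return (min(agent[0] + dt, GRID - 1), min(agent[1] + dt, GRID - 1))

def run_episode(q, dt, eps=0.2, alpha=0.1, gamma=0.95):
    """One episode; the adversary is rewarded for intercepting the agent (a failure)."""
    agent, adv = (0, 0), (GRID - 1, 0)
    actions = [(-dt, 0), (dt, 0), (0, -dt), (0, dt)]
    for _ in range(3 * GRID):
        state = (agent, adv)
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda i: q[(state, i)])
        dx, dy = actions[a]
        adv = (min(max(adv[0] + dx, 0), GRID - 1),
               min(max(adv[1] + dy, 0), GRID - 1))
        agent = agent_step(agent, dt)
        dist = abs(agent[0] - adv[0]) + abs(agent[1] - adv[1])
        reward = 10.0 if dist == 0 else -0.1 * dist   # shaped reward for closing in
        next_state = (agent, adv)
        best_next = max(q[(next_state, i)] for i in range(4))
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
        if dist == 0 or agent == (GRID - 1, GRID - 1):
            return dist == 0
    return False

q = defaultdict(float)
for _ in range(200):        # cheap low-fidelity (coarse dt) episodes seed the table
    run_episode(q, dt=2)
found = sum(run_episode(q, dt=1) for _ in range(50))   # refined at high fidelity
print(f"failure modes (interceptions) found at high fidelity: {found}/50")
```

In the full framework, information would also flow back from the high-fidelity learner to the low-fidelity one; this one-directional warm start does not capture that step.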
Procedia PDF Downloads 158
516 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, texture, etc., is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space – with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view. This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception. Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
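To make the weighting step concrete, the sketch below computes a classic isovist (count of visible cells) and a saliency-weighted counterpart on a toy occupancy grid. The grid layout, the random "saliency map," and the simple line-sampling visibility test are stand-in assumptions for illustration, not the authors' ray-traced pipeline.

```python
# A minimal sketch of saliency-weighted visibility analysis on a 2-D occupancy
# grid. Saliency values, grid layout, and the line-sampling visibility test are
# illustrative assumptions.
import numpy as np

def visible(grid, a, b, samples=50):
    """True if the straight line between cells a and b crosses no obstacle."""
    for t in np.linspace(0.0, 1.0, samples):
        x = int(round(a[0] + t * (b[0] - a[0])))
        y = int(round(a[1] + t * (b[1] - a[1])))
        if grid[x, y] == 1:  # 1 marks an obstacle cell
            return False
    return True

# 20 x 20 room with an internal wall segment.
grid = np.zeros((20, 20), dtype=int)
grid[10, 5:15] = 1

# Hypothetical saliency map (e.g., output of a saliency detector), here random.
rng = np.random.default_rng(0)
saliency = rng.random(grid.shape)

def weighted_isovist(grid, saliency, viewpoint):
    """Classic isovist area vs. a saliency-weighted counterpart."""
    cells = [(i, j) for i in range(grid.shape[0])
             for j in range(grid.shape[1]) if grid[i, j] == 0]
    vis = [c for c in cells if visible(grid, viewpoint, c)]
    area = len(vis)                              # what is *possible* to see
    weighted = sum(saliency[c] for c in vis)     # what is *likely* to be seen
    return area, weighted

print(weighted_isovist(grid, saliency, viewpoint=(2, 2)))
```

The same per-vertex weights can in principle be applied to visibility-graph measures, emphasizing connections to high-saliency locations.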
Procedia PDF Downloads 77
515 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of Artificial Intelligence (AI) technology carries unforeseen risks like privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. There are two fundamental purposes of this paper. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation. In doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis will be presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative approach, more specifically, an inductive approach to address the theoretical gap. Consequently, this paper tracks the different efforts to achieve “trustworthy AI” and “ethical AI,” compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit; and second, testing the frequency of each principle, providing the technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., toward the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. This study’s last contribution clarifies how the principles evolved. To illustrate, in 2018, the Montreal Declaration mentioned eight principles, including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design will strengthen the validity of previous studies. Yet, we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time. Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
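As a toy illustration of the frequency-testing step, the snippet below counts how often each principle appears across a handful of guideline documents; the document names and principle lists are placeholders, not the 12-document corpus analyzed in the study.

```python
# Placeholder corpus for illustration only; counts how often each principle
# appears across guideline documents.
from collections import Counter

documents = {
    "doc_2018_a": ["transparency", "privacy", "fairness", "accountability"],
    "doc_2020_b": ["privacy", "safety", "autonomy", "risk management"],
    "doc_2021_c": ["transparency", "fairness", "risk management",
                   "public participation", "human well-being"],
}

counts = Counter(p for principles in documents.values() for p in principles)
for principle, n in counts.most_common():
    share = n / len(documents)
    print(f"{principle:22s} appears in {n}/{len(documents)} documents ({share:.0%})")
```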
Procedia PDF Downloads 96
514 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Unilateral laminotomy and bilateral laminotomies are decompression methods for managing spinal stenosis whose success numerous studies have reported. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with the benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experimental and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion, and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following procedures: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens in this study were tested in flexion (8 Nm) and extension (6 Nm) under pure moment. Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1-S1) containing the vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3-L4 and L4-L5. The bottom surface of the S1 vertebral body was fully geometrically constrained in this study. A 10 Nm pure moment was also applied on the top surface of the L1 vertebral body to drive the lumbar spine through the different motions, such as flexion and extension. The experimental results showed that in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomies, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively. No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. In the L4-L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experiment: no significant differences were found at L4-L5 in either flexion or extension for each group, and only 0.02 and 0.04 degrees of variation were observed during flexion and extension, respectively, between the unilateral and bilateral laminotomy groups. In conclusion, the present results from finite element analysis and experiment reveal that no significant differences were observed during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit a similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrates this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision. Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
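For readers who want to see the shape of the group comparison, here is a minimal sketch using a one-way ANOVA and a pairwise t-test; the ROM values are placeholder numbers for illustration, not the measured porcine data, and the test shown is a simplification of the analysis described above.

```python
# Placeholder ROM values (degrees) for illustration only; not the study's data.
# One-way ANOVA across the three conditions, plus a pairwise check between the
# two laminotomy groups, which the abstract reports as non-significant.
from scipy import stats

intact     = [0.75, 0.90, 1.02]   # L3-L4 extension, one value per specimen
unilateral = [1.62, 1.70, 1.75]
bilateral  = [1.61, 1.74, 1.84]

f_stat, p_value = stats.f_oneway(intact, unilateral, bilateral)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

t_stat, p_pair = stats.ttest_ind(unilateral, bilateral)
print(f"unilateral vs bilateral: t = {t_stat:.2f}, p = {p_pair:.4f}")
```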
Procedia PDF Downloads 403
513 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of the extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely the one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 96% with the proposed method. Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
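A hedged, minimal sketch of the one-class-classification stage is shown below: a detector is trained only on features from standard conditions and then flags departures. The synthetic "fundamental frequency" features are placeholders, and the One-Class SVM baseline stands in for the OCCNN2 detector, which is not reproduced here.

```python
# Train on standard-condition features only, then detect anomalies; F1 is
# computed treating the anomaly class as the positive label of interest.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
# Four synthetic "fundamental frequencies" per observation (Hz), placeholders.
normal  = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(500, 4))
damaged = rng.normal(loc=[3.7, 4.8, 9.5, 10.0], scale=0.05, size=(100, 4))

train = normal[:400]                        # standard-condition data only
test  = np.vstack([normal[400:], damaged])
truth = np.r_[np.ones(100), -np.ones(100)]  # +1 normal, -1 anomaly

occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(train)
pred = occ.predict(test)                    # +1 inlier, -1 outlier
print("F1 (anomaly class):", f1_score(truth, pred, pos_label=-1))
```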
Procedia PDF Downloads 124
512 The Effectiveness of Multiphase Flow in Well- Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the wells and avoiding kicks and blowouts, as these can lead to loss of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas that is miscible in the drilling fluid. The current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering the more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as in managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties to enable realistic kick treatment; furthermore, a corresponding numerical solution method is built with an enriched data bank. The research work considers and implements models that take into consideration the effect of two phases in kick treatment for well control in conventional drilling. The software STARCCM+ was used for the computational studies to examine the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore. However, reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas exists in the annulus in the form of free gas. Keywords: multiphase flow, well control, STARCCM+, petroleum engineering and gas technology, computational fluid dynamics
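To show how the gas fraction enters the pressure calculation in the simplest possible terms, the sketch below evaluates a homogeneous (no-slip) mixture pressure gradient from hydrostatic and Darcy-Weisbach friction terms. This is a textbook simplification with illustrative input values, not the STARCCM+ wellbore model used in the study.

```python
# Homogeneous (no-slip) two-phase pressure gradient sketch; all inputs are
# illustrative assumptions, not the study's wellbore configuration.
def mixture_pressure_gradient(alpha_g, rho_l=1200.0, rho_g=90.0,
                              v=2.0, d_hyd=0.1, f=0.02, g=9.81):
    """Pressure gradient (Pa/m) for a homogeneous gas-liquid mixture.

    alpha_g : in-situ gas volume fraction (0-1)
    rho_l, rho_g : liquid / gas densities (kg/m^3)
    v : mixture velocity (m/s), d_hyd : hydraulic diameter (m)
    f : Darcy friction factor, g : gravity (m/s^2)
    """
    rho_m = alpha_g * rho_g + (1.0 - alpha_g) * rho_l
    hydrostatic = rho_m * g                      # Pa/m
    friction = f * rho_m * v**2 / (2.0 * d_hyd)  # Darcy-Weisbach, Pa/m
    return hydrostatic + friction

for alpha in (0.0, 0.1, 0.3):
    dpdz = mixture_pressure_gradient(alpha) / 1e3
    print(f"gas fraction {alpha:.1f}: dp/dz = {dpdz:.1f} kPa/m")
```

Even this crude model makes the trend visible: a larger gas fraction lowers the mixture density and therefore the hydrostatic contribution to bottom-hole pressure.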
Procedia PDF Downloads 119
511 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics
Authors: Mark Wiebe
Abstract:
One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” It can be argued that this shift distinguishes Thomas from many Analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections Analytic Philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism. This weakens their force against Thomas’ positions. In order to demonstrate these claims, it will be helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface has to do with how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham. Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. Contrastingly, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought. Keywords: theology, philosophy of religion, metaphysics, philosophy
Procedia PDF Downloads 75
510 Bacteriophage Is a Novel Solution of Therapy Against S. aureus Having Multiple Drug Resistance
Authors: Sanjay Shukla, A. Nayak, R. K. Sharma, A. P. Singh, S. P. Tiwari
Abstract:
Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative; thus, alternative treatment is necessary. Phage therapy is considered one of the most promising approaches to treat multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus are very efficiently controlled with phage cocktails containing different individual phage lysates that infect a majority of known pathogenic S. aureus strains. The aim of the present study was to evaluate the efficacy of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms. These samples were subjected to bacteriophage isolation by the double agar layer method. Of the 150 sewage samples, 27 showed plaque formation, producing lytic activity against S. aureus in the double agar overlay method. In TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage (ØVS) had icosahedral symmetry, with a head 52.20 nm in diameter and a long tail of 109 nm. The head and tail were held together by a connector, and the phage can be classified as a member of the family Myoviridae under the order Caudovirales. The recovered bacteriophage showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9, and ØVS27) was tested for in vivo antibacterial activity as well as its safety profile. Results of the mouse experiment indicated that the bacteriophage lysates were very safe and did not lead to any abscess formation, which indicates their safety in a living system. The mice were also prophylactically protected against S. aureus when the cocktail of bacteriophage lysates was administered just before the administration of S. aureus, which indicates that they are a good prophylactic agent. The S. aureus-inoculated mice recovered completely after bacteriophage administration (100% recovery), which compares very well with conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate, and these cases were followed up regularly for ten days (at 0, 5, and 10 d). The results indicated that six of the ten cases showed complete wound recovery within 10 d. The efficacy of bacteriophage therapy was found to be 60%, which compares very well with conventional antibiotic therapy for chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of chronic septic wounds. Keywords: phage therapy, S. aureus, antimicrobial resistance, lytic phage, and bacteriophage
Procedia PDF Downloads 117
509 Teacher Professional Development in Saudi Arabia through the Implementation of Universal Design for Learning
Authors: Majed A. Alsalem
Abstract:
Universal Design for Learning (UDL) is a common theme in education across the US and an influential model and framework that enables students in general, and particularly students who are deaf and hard of hearing (DHH), to access the general education curriculum. UDL helps teachers determine how information will be presented to students and how to keep students engaged. Moreover, UDL helps students to express their understanding and knowledge to others. UDL relies on technology to promote students' interaction with content and their communication of knowledge. This study included 120 DHH students who received daily instruction based on UDL principles. This paper presents the results of the study and discusses its implications for the integration of UDL in day-to-day practice as well as in the country's education policy. UDL is a Western concept that began and grew in the US, and it has just begun to transfer to other countries such as Saudi Arabia. It will be very important for researchers, practitioners, and educators to see how UDL is being implemented in a new place with a different culture. UDL is a framework built to provide multiple means of engagement, representation, and action and expression that should be part of curricula and lessons for all students. The purpose of this study is to investigate the variables associated with the implementation of UDL in Saudi Arabian schools and identify the barriers that could prevent the implementation of UDL. Therefore, this study used a mixed methods design that employs both quantitative and qualitative methods. More insights will be gained by including both quantitative and qualitative methods than by using a single method; by having methods that draw on different concepts and approaches, the database will be enriched. This study collects data in two stages in order to ensure that the data come from multiple sources, mitigating validity threats and establishing trustworthiness in the findings. The rationale and significance of this study is that it will be the first known research that targets UDL in Saudi Arabia. Furthermore, it will deal with UDL in depth to set the path for further studies in the Middle East. From a content perspective, this study considers teachers' implementation knowledge, skills, and concerns about implementation. This study deals with effective instructional designs that have not been presented in any conferences, workshops, or teacher preparation and professional development programs in Saudi Arabia. Specifically, Saudi Arabian schools are challenged to design inclusive schools and practices as well as to support all students' academic skills development. A total of 336 teachers of DHH students participated in stage one. The results of the intervention indicated significant differences in teachers' understanding and level of concern before and after taking the training sessions. Teachers indicated an interest in knowing more about UDL and adopting it in their practices; they reported that UDL has benefits that will enhance their performance in supporting student learning. Keywords: deaf and hard of hearing, professional development, Saudi Arabia, universal design for learning
Procedia PDF Downloads 432
508 Evaluating the Benefits of Intelligent Acoustic Technology in Classrooms: A Case Study
Authors: Megan Burfoot, Ali GhaffarianHoseini, Nicola Naismith, Amirhosein GhaffarianHoseini
Abstract:
Intelligent Acoustic Technology (IAT) is a novel architectural device used in buildings to automatically vary the acoustic conditions of a space. IAT is realized by integrating two components: Variable Acoustic Technology (VAT) and an intelligent system. The VAT passively alters the reverberation time (RT) by changing the total sound absorption in a room; in doing so, the sound strength and clarity are altered. The intelligent system detects sound waves in real time to identify the aural situation, and the RT is adjusted accordingly based on pre-programmed algorithms. IAT - the synthesis of these two components - can dramatically improve acoustic comfort, as the acoustic condition is automatically optimized for any detected aural situation. This paper presents an evaluation of the improvements in acoustic comfort in an existing tertiary classroom located at Auckland University of Technology in New Zealand. This is a pilot case study, the first of its kind attempting to quantify the benefits of IAT. Naturally, the potential acoustic improvements from IAT can be actualized by installing only the VAT component of IAT and adjusting it manually rather than utilizing an intelligent system. Such a simplified methodology is adopted for this case study to understand the potential significance of IAT without adopting a time- and cost-intensive strategy. For this study, the VAT is built by overlaying reflective, rotating louvers over sound absorption panels. RTs are measured according to international standards before and after installing the VAT in the classroom. The louvers are manually rotated in increments by the experimenter, and further RT measurements are recorded. The results are compared with recommended guidelines and reference values from national standards for spaces intended for speech and communication. The results obtained from the measurements are used to quantify the potential improvements in classroom acoustic comfort were IAT to be used. This evaluation reveals the current existence of poor acoustic conditions in the classroom caused by high RTs. The poor acoustics are also largely attributed to the classroom's inability to vary acoustic parameters for changing aural situations: the classroom experiences one static acoustic state, neglecting the nature of classrooms as flexible, dynamic spaces. Evidently, when using VAT, the classroom can achieve a wide range of RTs; namely, acoustic requirements for varying teaching approaches are satisfied, and acoustic comfort is improved. By quantifying the benefits of using VAT, it can be confidently suggested that these same benefits would be achieved with IAT. Nevertheless, it is encouraged that future studies continue this line of research toward the eventual development of IAT and its acceptance into mainstream architecture. Keywords: acoustic comfort, classroom acoustics, intelligent acoustics, variable acoustics
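As a back-of-the-envelope illustration of how exposing or covering absorption panels shifts RT, the sketch below uses the Sabine relation RT60 = 0.161 V / A. The room volume, surface areas, and absorption coefficients are illustrative assumptions, not the measured properties of the AUT classroom.

```python
# Sabine-formula sketch: rotating louvers expose more or less of an absorber,
# changing total absorption A and hence RT60. All inputs are assumed values.
def rt60_sabine(volume_m3, areas_m2, alphas):
    """Sabine reverberation time from surface areas and absorption coefficients."""
    absorption = sum(s * a for s, a in zip(areas_m2, alphas))  # total sabins (m^2)
    return 0.161 * volume_m3 / absorption

volume = 200.0                  # m^3, seminar-room scale (assumption)
hard_surfaces = 180.0           # m^2 of ordinary walls/ceiling/floor
panel_area = 30.0               # m^2 of absorber covered by the louvers

for exposure in (0.0, 0.5, 1.0):    # louvers closed, half open, fully open
    areas = [hard_surfaces, panel_area * exposure, panel_area * (1 - exposure)]
    alphas = [0.05, 0.85, 0.10]     # hard surface, exposed absorber, reflective louver
    print(f"exposure {exposure:.0%}: RT60 = {rt60_sabine(volume, areas, alphas):.2f} s")
```

With these assumed values the RT falls from roughly 2.7 s with the louvers closed to below 1 s fully open, which is the kind of range a manually adjusted VAT installation is intended to span.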
Procedia PDF Downloads 189
507 Spatial Deictics in Face-to-Face Communication: Findings in Baltic Languages
Authors: Gintare Judzentyte
Abstract:
The present research is aimed at discussing the semantics and pragmatics of spatial deictics (deictic adverbs of place and demonstrative pronouns) in the Baltic languages: in spoken Lithuanian and in spoken Latvian. The following objectives have been identified to achieve the aim: 1) to determine the usage of adverbs of place in spoken Lithuanian and Latvian and to verify their meanings in face-to-face communication; 2) to determine the usage of demonstrative pronouns in spoken Lithuanian and Latvian and to verify their meanings in face-to-face communication; 3) to compare the systems of the two spoken languages and to identify the main tendencies. As the meanings of demonstratives (adverbs of place and demonstrative pronouns) are context-bound, it is necessary to verify their usage in spontaneous interaction. Besides, deictic gestures play a very important role in face-to-face communication. Therefore, an experimental method is necessary to collect the data. Video material representing spoken Lithuanian and spoken Latvian was recorded by means of the method of a qualitative interview (a semi-structured interview: empirical research is all about asking the right questions). The collected material was transcribed and evaluated taking into account several approaches: 1) physical distance (location of the referent, visual accessibility of the referent); 2) deictic gestures (the combination of language and gesture is especially characteristic of the exophoric use); 3) representation of mental spaces in physical space (a speaker sometimes wishes to mark something that is physically close as psychologically distant and vice versa). The analysis of the collected data revealed that in face-to-face communication the participants choose deictic adverbs of place instead of demonstrative pronouns to locate/identify entities in situations where the demonstrative pronouns would be expected, in spoken Lithuanian as well as in spoken Latvian. The analysis showed that visual accessibility of the referent is very important in face-to-face communication, but the main criterion while localizing objects and entities is the need for contrast: lith. čia ‘here’, šis ‘this’, latv. šeit ‘here’, šis ‘this’ usually identify distant entities and are used instead of distal demonstratives (lith. ten ‘there’, tas ‘that’, latv. tur ‘there’, tas ‘that’) because the referred objects/subjects contrast with more distant entities. Furthermore, the interlocutors in examples from spontaneously situated interaction usually extend their space and can refer to a ‘distal’ object/subject with a ‘proximal’ demonstrative based on a psychological choice. As the research on the spoken Baltic languages confirmed, the choice of spatial deictics in face-to-face communication is strongly affected by a complex of criteria. Although there are some main tendencies, the exact meaning of spatial deictics in the spoken Baltic languages is revealed, and is relevant, only in a certain context. Keywords: Baltic languages, face-to-face communication, pragmatics, semantics, spatial deictics
Procedia PDF Downloads 290
506 A Visualization Classification Method for Identifying the Decayed Citrus Fruit Infected by Fungi Based on Hyperspectral Imaging
Authors: Jiangbo Li, Wenqian Huang
Abstract:
Early detection of fungal infection in citrus fruit is one of the major problems in the postharvest commercialization process. The automatic and nondestructive detection of infected fruits is still a challenge for the citrus industry. At present, the visual inspection of rotten citrus fruits is commonly performed by workers through ultraviolet-induced fluorescence technology or manual sorting in citrus packinghouses to remove fruit subject to fungal infection. However, the former entails a number of problems because exposing people to this kind of lighting is potentially hazardous to human health, and the latter is very inefficient. Oranges were used as the research object. This study focuses on this problem and proposes an effective method based on Vis-NIR hyperspectral imaging in the wavelength range of 400-1000 nm with a spectroscopic resolution of 2.8 nm. In this work, three normalization approaches are applied prior to analysis to reduce the effect of sample curvature on spectral profiles, and it is found that mean normalization was the most effective pretreatment for decreasing spectral variability due to curvature. Then, principal component analysis (PCA) was applied to a dataset composed of average spectra from decayed and normal tissue to reduce the dimensionality of the data and observe the ability of Vis-NIR hyperspectra to discriminate between the two classes. In this case, it was observed that normal and decayed spectra were separable along the resultant first principal component (PC1) axis. Subsequently, five wavelengths (bands) centered at 577, 702, 751, 808, and 923 nm were selected as the characteristic wavelengths by analyzing the loadings of PC1. A multispectral combination image was generated based on the five selected characteristic wavelength images. Based on the obtained multispectral combination image, the intensity-slicing pseudocolor image processing method is used to generate a 2-D visual classification image that enhances the contrast between normal and decayed tissue. Finally, an image segmentation algorithm for the detection of decayed fruit was developed based on the pseudocolor image coupled with a simple thresholding method. For the investigated 238 independent-set samples, including fruits infected by Penicillium digitatum and normal fruits, the total success rates were 100% and 97.5%, respectively. The proposed algorithm was also used to identify oranges infected by Penicillium italicum with 100% identification accuracy, indicating that the proposed multispectral algorithm is an effective method and has the potential to be applied in the citrus industry. Keywords: citrus fruit, early rotten, fungal infection, hyperspectral imaging
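A compact sketch of the band-selection logic described above is given below: mean-normalize the spectra, fit PCA, and take the wavelengths with the largest absolute PC1 loadings. The synthetic spectra are placeholders rather than the orange dataset, so the bands it returns will not match the 577-923 nm set reported in the abstract.

```python
# Placeholder spectra for illustration; PC1 loadings drive the band selection.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 1000, 215)            # ~2.8 nm spacing

def synth_spectra(n, decayed=False):
    base = 0.5 + 0.3 * np.exp(-((wavelengths - 700) / 120) ** 2)
    if decayed:                                       # assumed reflectance dip
        base = base - 0.1 * np.exp(-((wavelengths - 580) / 60) ** 2)
    return base + 0.01 * rng.standard_normal((n, wavelengths.size))

spectra = np.vstack([synth_spectra(50), synth_spectra(50, decayed=True)])

# Mean normalisation to suppress curvature-driven intensity variation.
spectra_norm = spectra / spectra.mean(axis=1, keepdims=True)

pca = PCA(n_components=2).fit(spectra_norm)
loadings = np.abs(pca.components_[0])
top_bands = wavelengths[np.argsort(loadings)[-5:]]   # five characteristic bands
print("selected wavelengths (nm):", np.sort(top_bands).round(0))
```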
Procedia PDF Downloads 304