Search results for: nano technology
1002 Medical Ethics in the Hospital: Towards Quality Ethics Consultation
Authors: Dina Siniora, Jasia Baig
Abstract:
During the past few decades, the healthcare system has undergone profound changes in its decision-making competencies and moral aptitudes due to vast advancements in technology, clinical skills, and scientific knowledge. Healthcare decision-making deals with morally contentious dilemmas ranging from illness to life-and-death judgments that require sensitivity and awareness of the patient's preferences while taking into consideration medicine's abilities and boundaries. As the ever-evolving field of medicine continues to become more scientifically and morally multifarious, physicians and hospital administrators increasingly rely on ethics committees to resolve problems that arise in everyday patient care. The role and latitude of responsibilities of ethics committees, which include serving as dispute intermediaries, moral analysts, policy educators, counselors, advocates, and reviewers, suggest the importance and effectiveness of a fully integrated committee. Despite achievements in Integrated Ethics and progress in standards and competencies, there is an imminent necessity for further improvement in the quality of ethics consultation services in the areas of credentialing, professionalism, and standards of quality, as well as in the quality of healthcare throughout the system. These concerns can be addressed first by collecting data about particular quality gaps and understanding the extent to which ethics committees are consistent with newly published ASBH quality standards. Policymakers should pursue improvement strategies that target both the academic bioethics community and major stakeholders at hospitals, who directly influence ethics committees. This broader approach, oriented towards education and intervention outcomes in conjunction with preventive ethics, addresses disparities in quality on a systematic level. Adopting tools for improving competencies and processes within ethics consultation by implementing a credentialing process, upholding the normative significance of the ASBH core competencies, advocating for a professional Code of Ethics, and further clarifying internal structures will improve productivity, patient satisfaction, and institutional integrity. This cannot be systematically achieved without a written certification exam for HCEC practitioners, credentialing and privileging of HCEC practitioners at the hospital level, and accreditation of HCEC services at the institutional level.
Keywords: ethics consultation, hospital, medical ethics, quality
Procedia PDF Downloads 189
1001 Advanced Deployable/Retractable Solar Panel System for Satellite Applications
Authors: Zane Brough, Claudio Paoloni
Abstract:
Modern low earth orbit (LEO) satellites that require multi-mission flexibility are highly likely to be repositioned between different operational orbits. While executing this process, the satellite may experience high levels of vibration and environmental hazards, exposing the deployed solar panel to dangerous stress levels, fatigue, and space debris; hence, it is desirable to retract the solar array before satellite repositioning to avoid damage or failure. Furthermore, to meet the demands of today's technological world, the power demand of a modern LEO satellite is rapidly increasing, which consequently puts pressure on the design of the satellite's solar array system to conform to strict volume and mass limitations. A novel concept for a deployable/retractable hybrid solar array system, aimed at providing a greater power-to-volume ratio while dramatically reducing system mass and cost, is proposed. Taking advantage of new lightweight solar panel technology, a mechanical system composed of both rigid and flexible solar panels arranged in a petal formation is proposed to yield a stowed-to-deployed area ratio of at least 1:7, which improves the power density dramatically. The system consists of five subsystems, the outer ones based on a novel eight-petal configuration that provides a large surface and supports the flexible solar panels. A single cable-and-spool-based hinge mechanism was designed to synchronously deploy/retract the panels in a safe, simple, and efficient manner, while the mass is considerably reduced compared to previous systems. The relevant challenge of assuring smooth movement is resolved by proper minimization of the gearing system and the use of a micro-controller system. A prototype was designed using 3D simulation tools and successfully constructed and tested. Further design work is in progress to implement an epicyclic gear hinge mechanism, which will further reduce the volume, mass, and complexity of the system significantly. The proposed system, thanks to an effective and reliable mechanism, provides a large active surface whilst being very compact. It could also be extremely advantageous for use as a ground-portable solar panel system.
Keywords: mechatronic engineering, satellite, solar panel, deployable/retractable mechanism
Procedia PDF Downloads 378
1000 Monitoring Land Cover/Land Use Change in Rupandehi District by Optimising Remotely Sensed Image
Authors: Hritik Bhattarai
Abstract:
Land use and land cover play a crucial role in preserving and managing Earth's natural resources. Various factors, such as economic, demographic, social, cultural, technological, and environmental processes, contribute to changes in land use and land cover (LULC). Rupandehi District is significantly influenced by a combination of driving forces, including its geographical location, rapid population growth, economic opportunities, globalization, tourism activities, and political events. Urbanization and urban growth in the region have been occurring in an unplanned manner, with internal migration and natural population growth being the primary contributors. Internal migration, particularly from neighboring districts in the higher and lower Himalayan regions, has been high, leading to increased population growth and density. This study utilizes geospatial technology, specifically geographic information system (GIS), to analyze and illustrate the land cover and land use changes in the Rupandehi district for the years 2009 and 2019, using freely available Landsat images. The identified land cover categories include built-up area, cropland, Das-Gaja, forest, grassland, other woodland, riverbed, and water. The statistical analysis of the data over the 10-year period (2009-2019) reveals significant percentage changes in LULC. Notably, Das-Gaja shows a minimal change of 99.9%, while water and forest exhibit increases of 34.5% and 98.6%, respectively. Riverbed and built-up areas experience changes of 95.3% and 39.6%, respectively. Cropland and grassland, however, show concerning decreases of 102.6% and 140.0%, respectively. Other woodland also indicates a change of 50.6%. The most noteworthy trends are the substantial increase in water areas and built-up areas, leading to the degradation of agricultural and open spaces. This emphasizes the urgent need for effective urban planning activities to ensure the development of a sustainable city. While Das-Gaja seems unaffected, the decreasing trends in cropland and grassland, accompanied by the increasing built-up areas, are unsatisfactory. It is imperative for relevant authorities to be aware of these trends and implement proactive measures for sustainable urban development.
Keywords: land use and land cover, geospatial, urbanization, geographic information system, sustainable urban development
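For illustration, a minimal sketch (not taken from the study) of how per-class area and percentage change between two classified years can be tabulated from classified rasters; the class codes, raster contents, and pixel size below are assumptions:

```python
import numpy as np

# Hypothetical classified rasters (one class code per pixel) for 2009 and 2019;
# in practice these would come from the GIS classification of the Landsat scenes.
classes = {1: "built-up", 2: "cropland", 3: "forest", 4: "water"}
pixel_area_km2 = (30 * 30) / 1e6               # a Landsat pixel is 30 m x 30 m

rng = np.random.default_rng(0)
raster_2009 = rng.integers(1, 5, size=(1000, 1000))
raster_2019 = rng.integers(1, 5, size=(1000, 1000))

for code, name in classes.items():
    a09 = np.count_nonzero(raster_2009 == code) * pixel_area_km2
    a19 = np.count_nonzero(raster_2019 == code) * pixel_area_km2
    change = (a19 - a09) / a09 * 100.0          # percentage change relative to 2009
    print(f"{name:<10} 2009: {a09:8.1f} km2  2019: {a19:8.1f} km2  change: {change:+6.1f} %")
```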
Procedia PDF Downloads 60
999 Emerging Barriers and Enablers of Digital Inclusion for Students with Disabilities in Ethiopian Education
Authors: Merih Welay Welesilassie
Abstract:
This research investigated the factors influencing digital inclusion for young students with disabilities in Ethiopian schools. In this context, socio-economic, infrastructural, and cultural challenges amplify educational disparities. In an era in which digital technology plays a pivotal role in education, it is crucial to ensure equitable access for students with disabilities. Nevertheless, obstacles like inadequate infrastructure, insufficient teacher training, and economic constraints impede the incorporation of digital tools in educational environments, especially for marginalised groups. This study employed an explanatory sequential mixed-methods approach involving data collection through a survey administered to 300 students. Subsequently, in-depth interviews were conducted with 30 participants to provide comprehensive insights into their experiences. The quantitative analysis uncovered that students with disabilities have limited support for digital readiness, find digital technologies less accessible, and perceive digital tools as less easy to use. The study revealed that economic barriers, such as the high cost of devices and limited internet access, prevent students from fully utilising digital resources. Furthermore, infrastructural challenges, such as unreliable electricity and poor internet connectivity, exacerbate the issue. The qualitative data provided a more profound understanding by emphasising social and attitudinal obstacles, including a lack of empathy from both peers and educators, exclusion from participatory digital tasks, and enduring negative stereotypes regarding disabilities. The research highlights the importance of implementing interventions to enhance digital accessibility for students with disabilities. Essential suggestions encompass refining teacher training programs to effectively facilitate inclusive education, improving digital infrastructure, and offering financial assistance to procure digital tools. Furthermore, implementing policy reforms and public awareness campaigns is crucial to cultivate a cultural shift and nurture a more inclusive societal atmosphere. This study yields valuable perspectives on the digital inclusion scenario in Ethiopia, laying the groundwork for prospective research endeavours to narrow the digital gap for students with disabilities.
Keywords: digital inclusion, students with disabilities, Ethiopian education, barriers and access
Procedia PDF Downloads 20
998 L2 Exposure Environment, Teaching Skills, and Beliefs about Learners’ Out-of-Class Learning: A Survey on Teachers of English as a Foreign Language
Authors: Susilo Susilo
Abstract:
In the process of foreign language acquisition, L2 exposure is widely assumed to be effective in helping learners increase their proficiency. However, getting enough L2 exposure in the context of learning English as a foreign language is not as easy as in a first-language learning context. Therefore, L2 exposure beyond the classroom is helpful for EFL learners in achieving language tasks. Alongside the rapid development of technology and media, English as a foreign language is now used virtually in the social media of almost all regions, changing the faces of Teaching English as a Foreign Language (TEFL). This different face of TEFL unavoidably prompts teachers to treat their students differently in the classroom so that they can put more effort into maximizing beyond-the-class learning to help improve in-class achievements. The study aims to investigate: 1) EFL teachers' teaching skills and beliefs about students' out-of-class activities in different L2 exposure environments, and 2) the effect of different L2 exposure environments on EFL teachers' teaching skills and beliefs about students' out-of-class activities. This is a survey of 80 EFL teachers from senior high schools in three regions of two provinces in Indonesia. A questionnaire using a four-point Likert scale was distributed to the respondents to elicit data. The questionnaires were developed by referring to the constructs of teaching skills (i.e. teaching preparation, teaching action, and teaching evaluation) and beliefs about out-of-class learning (i.e. setting, process and atmosphere), which were taken from expert definitions. The internal consistencies of these constructs were examined using Cronbach's alpha. The data of the study were analyzed using the SPSS program, i.e. descriptive statistics and independent-samples t-tests. The standard for determining significance was p < .05. The results revealed that: 1) teaching skills performed by teachers of English as a foreign language in different exposure environments showed various foci of teaching skills, 2) the teachers showed various beliefs about students' out-of-class activities in different exposure environments, 3) there was a significant difference in the scores for NNESTs' teaching skills between urban regions (M=34.5500, SD=4.24838) and rural schools (M=24.9500, SD=2.42794); t(78)=12.408, p = 0.000; and 4) there was a significant difference in the scores for NNESTs' beliefs about students' out-of-class activities between urban schools (M=36.9250, SD=6.17434) and rural regions (M=29.4250, SD=4.56793); t(78)=6.176, p = 0.000. These results suggest that different L2 exposure environments really do have effects on teachers' teaching skills and beliefs about their students' out-of-class learning.
Keywords: belief about EFL out-of-class learning, L2 exposure environment, teachers of English as a foreign language, teaching skills
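The reported group comparison can be checked with a short, hedged sketch that recomputes the independent-samples t-test from the published means and standard deviations; the equal group size of 40 per region is an assumption inferred from df = 78 and is not stated in the abstract:

```python
from scipy.stats import ttest_ind_from_stats

# Reported summary statistics for NNESTs' teaching skills (urban vs. rural schools);
# n = 40 per group is assumed from t(78), i.e. 40 + 40 - 2 = 78 degrees of freedom.
t, p = ttest_ind_from_stats(mean1=34.55, std1=4.24838, nobs1=40,
                            mean2=24.95, std2=2.42794, nobs2=40,
                            equal_var=True)
print(f"t(78) = {t:.3f}, p = {p:.6f}")   # close to the reported t(78) = 12.408, p = 0.000
```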
Procedia PDF Downloads 342
997 Utilising Indigenous Knowledge to Design Dykes in Malawi
Authors: Martin Kleynhans, Margot Soler, Gavin Quibell
Abstract:
Malawi is one of the world’s poorest nations, and consequently, the design of flood risk management infrastructure comes with a different set of challenges. There is a lack of good hydromet data, both in spatial coverage and in quality, and the challenge in the design of flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, and they should be resilient. They also have to be cost-effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions have been designed across the valley during the course of the Shire River Basin Management Project – Phase I, and due to the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is ‘fuzzy’ and that it can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, events that could occur differently to those that have occurred in the past, or situations where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter-intuitive, and the rural poor may have a lower trust of technology. Due to a near-complete lack of maintenance, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates and sophisticated warning systems. Dyke designs developed during this project included ‘flood warning spillways’, which double up as pedestrian and animal crossing points and warn residents of impending dangerous water levels behind dykes before levels that could cause a possible dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.
Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi
Procedia PDF Downloads 279
996 Opening of North Sea Route and Geopolitics in Arctic: Impact and Possibilities of Route
Authors: Nikkey Keshri
Abstract:
The Arctic is a polar region located in the far north of the Earth. It consists of the Arctic Ocean and parts of Canada, Russia, the United States, Denmark, Norway, Sweden, Finland, and Iceland. The Arctic has vast natural resources that can be exploited with modern technology, and the economic opening up of Russia has given new opportunities. All these states have connected with the Arctic region for economic activities, and this affects the region's ecology. The pollution problem is a serious threat to the health of people living around pollution sources. Due to the prevailing worldwide sea and air currents, the Arctic area is the fallout region for long-range transport pollutants, and in some places the concentrations exceed the levels of densely populated urban areas. The Arctic is especially vulnerable to the effects of global warming, as has become apparent from the melting sea ice in recent years. Climate models predict much greater warming in the Arctic than the global average, resulting in significant international attention to the region. Global warming has an adverse impact on the climate, indigenous people, wildlife, and infrastructure. However, several opportunities have emerged in the form of shipping routes, resources, and new territories. The shipping route through the Arctic is a reality and is currently navigable for a few weeks during summer. There are large deposits of oil, gas, minerals and fish, and the surrounding countries with Arctic coastlines are becoming quite assertive about exercising their sovereignty over the newfound wealth. The main focus of the research is how the opening of the Northern Sea Route provides opportunities or creates problems in the Arctic and how the route is becoming geopolitically important. It examines the interests of Arctic and non-Arctic states and their present and anticipated global geopolitical aims. The Northern Sea Route might open up due to climate change, and Iceland might benefit from or be affected by the situation. Efforts will be made to answer the research question: 'Is the opening of the Northern Sea Route providing opportunities or becoming a risk for the Arctic region?' Every research project has a structure, usually called its design. In this research, both qualitative and quantitative methods are used, drawing on literature, maps, pie charts, etc., to answer the research question. The aim of this research is to find out the impact of the opening of the Northern Sea Route on the Arctic region, how this makes the Arctic geopolitically important, and how climate change affects this particular geographical area.
Keywords: climate change, geopolitics, international relations, Northern Sea Route
Procedia PDF Downloads 258
995 Cross-Tier Collaboration between Preservice and Inservice Language Teachers in Designing Online Video-Based Pragmatic Assessment
Authors: Mei-Hui Liu
Abstract:
This paper reports the progression of language teachers’ learning to assess students’ speech act performance via online videos in a cross-tier professional growth community. This yearlong research project collected multiple data sources from several stakeholders, including 12 preservice and 4 inservice English as a foreign language (EFL) teachers, 4 English professionals, and 82 high school students. Data sources included surveys, (focus group) interviews, online reflection journals, online video-based assessment items/scores, and artifacts related to teacher professional learning. The major findings depicted the effectiveness of this proposed learning module for language teacher development in pragmatic assessment as well as its impact on the student learning experience. All these teachers appreciated this professional learning experience, which enhanced their knowledge of assessing students’ pragmalinguistic and sociopragmatic performance in an English speech act (i.e., making refusals). They learned how to design online video-based assessment items by attending to specific linguistic structures, semantic formulae, and sociocultural issues. They further became aware of how to sharpen their pragmatic instructional skills in the near future after putting theories into online assessment and related classroom practices. Additionally, data analysis revealed students’ achievement in and satisfaction with the designed online assessment. Yet, during the professional learning process, most participating teachers encountered challenges in reaching a consensus on selecting appropriate video clips from available sources to present the sociocultural values in English-speaking refusal contexts. A further challenge was constructing test items that could testify to the influence of interlanguage transfer on students’ pragmatic performance in various conversational scenarios. With pedagogical implications and research suggestions, this study adds to the increasing amount of research into integrating preservice and inservice EFL teacher education in pragmatic assessment and relevant instruction. Acknowledgment: This research project is sponsored by the Ministry of Science and Technology in the Republic of China under grant number MOST 106-2410-H-029-038.
Keywords: cross-tier professional development, inservice EFL teachers, pragmatic assessment, preservice EFL teachers, student learning experience
Procedia PDF Downloads 259
994 Conceptualizing Personalized Learning: Review of Literature 2007-2017
Authors: Ruthanne Tobin
Abstract:
As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, life-long learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers and co-contributors with their teachers of what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice and choice) in their students? c) What kinds of systems, processes and procedures are being used to guide the innovation? d) How is learning organized, monitored and assessed? e) What role do inquiry-based models play? f) How do teachers integrate the three types of knowledge: content, pedagogical and technological? g) Which kinds of forces enable, and which impede, personalizing learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks? One finding of the review shows that while technology can dramatically expand access to information, its impact on teaching and learning often falls short of expectations unless the technologies are paired with excellent pedagogies that address students’ needs, interests and aspirations. This literature review fills a significant gap in this emerging field of research, as it serves to increase the conceptual clarity whose absence has hampered both the theorizing and the classroom implementation of a personalized learning model.
Keywords: curriculum change, educational innovation, personalized learning, school reform
Procedia PDF Downloads 223
993 Integrating Cyber-Physical System toward Advanced Intelligent Industry: Features, Requirements and Challenges
Authors: V. Reyes, P. Ferreira
Abstract:
In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, a rapid increase in production volume and, simultaneously, a customization process require lower costs, more variety, and accurate product quality. Reducing production cycle time, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are key features that give industrial systems the capability to adapt to customer demands by modifying the manufacturing process through an autonomous response and acting preventively to avoid errors. The industrial system incorporates a diversified number of components that, in the advanced industry, are expected to be decentralized, to communicate end to end, and to be capable of making their own decisions through feedback. The evolution towards the advanced intelligent industry defines a set of stages that endow components with intelligence and enhance efficiency in order to reach the decision-making stage. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabler technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real into the virtual world and provides the transparency required for real-time monitoring and control, contributing to important features of the advanced intelligent industry and simultaneously improving sustainability. Assuming the industrial CPS as the core technology toward the latest advanced intelligent industry stage, this paper reviews and highlights the correlation and contributions of the enabler technologies to the operationalization of each stage on the path toward the advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed. The functionalities required to endow the industrial system with adaptability, and the associated issues, are identified.
Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model
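As a purely illustrative sketch of the sensor-to-digital-twin data flow described above (the class, attribute, and threshold names are hypothetical and not taken from the paper):

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DigitalTwin:
    """Minimal virtual mirror of a physical asset: it ingests sensor samples,
    tracks the asset state, and flags deviations so a preventive action can follow."""
    asset_id: str
    nominal: float          # nominal value of the monitored quantity
    tolerance: float        # allowed deviation before intervention
    history: list = field(default_factory=list)

    def ingest(self, reading: float) -> None:
        self.history.append(reading)          # real-world sensor data into the virtual model

    def state(self) -> float:
        return mean(self.history) if self.history else self.nominal

    def needs_intervention(self) -> bool:
        return abs(self.state() - self.nominal) > self.tolerance

# Usage: stream readings from the physical side and query the virtual side.
twin = DigitalTwin(asset_id="robot-cell-01", nominal=10.0, tolerance=0.5)
for sample in [10.1, 10.3, 11.2, 11.4]:
    twin.ingest(sample)
print(twin.state(), twin.needs_intervention())
```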
Procedia PDF Downloads 118
992 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for patients for one or more specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body; and which does not achieve its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients’ safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 89
991 A Triad Pedagogy for Increased Digital Competence of Human Resource Management Students: Reflecting on Human Resource Information Systems at a South African University
Authors: Esther Pearl Palmer
Abstract:
Driven by the increased pressure on Higher Education Institutions (HEIs) to produce work-ready graduates for the modern world of work, this study reflects on triad teaching and learning practices to increase student engagement and employability. In the South African higher education context, the employability of graduates is imperative in strengthening the country’s economy and in increasing competitiveness. Within this context, the field of Human Resource Management (HRM) calls for innovative methods and approaches to teaching, learning and assessing the skills and competencies of graduates to render them employable. Digital competency in Human Resource Information Systems (HRIS) is an important component of and prerequisite for employment in HRM. The purpose of this research is to reflect on the subject HRIS developed by lecturers at the Central University of Technology, Free State (CUT), with the intention of actively engaging students in real-world learning activities and increasing their employability. The Enrichment Triad Model (ETM) was used as the theoretical framework to develop the subject, as it supports a triad teaching and learning approach to education. It is, furthermore, an inter-structured model that supports collaboration between industry, academics and students. The study follows a mixed-methods approach to reflect on the learning experiences of industry, academics and students in the subject field over the past three years. This paper is a work in progress and seeks to broaden the scope of extant studies about student engagement in work-related learning to increase employability. Based on the ETM as theoretical framework and pedagogical practice, this paper proposes that following a triad teaching and learning approach will increase the work-related skills of students. Findings from the study show that students, academics and industry alike regard educational opportunities that incorporate active learning experiences connected to the world of work as enhancing student engagement in learning and rendering students more employable.
Keywords: digital competence, enriched triad model, human resource information systems, student engagement, triad pedagogy
Procedia PDF Downloads 92
990 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as the double-loop model and its variations. Metacognition has also been suggested as a concept to describe how team learning is more than a simple sum of the individual learning of the team members. Team learning has a positive correlation with both the individual motivation of its members and the collective factors within the team. The team learning of previously very independent members of two teaching teams is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills. The units of teaching and learning are becoming increasingly large for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and learning experiences. This occurs in student teams. Secondly, the teaching of multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted. Team role stress has been analyzed in project teams. Projects typically have a well-defined goal and organization. This paper explores the team stress of two teaching teams running two parallel course units in engineering education. The first is Industrial Automation Technology and the second is Development of Medical Devices. The courses have separate student groups, and they are on different campuses. Both are run in parallel within an 8-week period. Each is taught by a group of four teachers, each with several years of teaching experience gained individually. The team role stress scale survey is administered to both teaching groups at the beginning of the course and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stages of the two teaching groups are different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and help understand the maturity of functional and well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams that deliver highly intangible results in the form of learning outcomes are sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.
Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads 225
989 Physical Education Effect on Sports Science Analysis Technology
Authors: Peter Adly Hamdy Fahmy
Abstract:
The aim of the study was to examine the effects of a physical education program on student learning by comparing the teaching of personal and social responsibility (TPSR) combined with a physical education model against TPSR combined with a traditional teaching model, with learning outcomes involving sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes with a total of 133 students: 75 in the experimental group and 58 in the control group; each teacher taught both an experimental and a control class for 16 weeks. The research methods used surveys, interviews and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, Sports Enthusiasm Scale, Group Cohesion Scale, Sports Self-Efficacy Scale, and Game Performance Assessment Tool. Multivariate analyses of covariance and repeated measures ANOVA were used to examine differences in student learning outcomes between combining the TPSR with a physical education model and the TPSR with a traditional teaching model. The research findings are as follows: 1) The TPSR sports education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness and responsibility. 2) A traditional teaching model with TPSR could improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sports education model with TPSR could improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility and game performance. 4) Based on qualitative data on teachers' and students' learning experience, the physical education model with TPSR significantly improves learning motivation, group interaction and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in the physical education program. On the other hand, the hybrid curriculum projects TPSR - Physical Education and TPSR - Traditional Education are good curriculum projects for moral character education that can be used in school physical education.
Keywords: approach competencies, physical education, teachers employment, graduate, physical education and sport sciences, SWOT analysis, character education, sport season, game performance, sport competence
Procedia PDF Downloads 46
988 Doing Durable Organisational Identity Work in the Transforming World of Work: Meeting the Challenge of Different Workplace Strategies
Authors: Theo Heyns Veldsman, Dieter Veldsman
Abstract:
Organisational Identity (OI) refers to who and what the organisation is, what it stands for and does, and what it aspires to become. OI explores the perspectives of how we see ourselves, are seen by others and aspire to be seen. It provides as rationale the ‘why’ for the organisation’s continued existence. The most widely accepted differentiating features of OI are encapsulated in the organisation’s core, distinctive, differentiating, and enduring attributes. OI finds its concrete expression in the organisation’s Purpose, Vision, Strategy, Core Ideology, and Legacy. In the emerging new order infused by hyper-turbulence and hyper-fluidity, the VICCAS world, OI provides a secure anchor and steady reference point for the organisation, particularly the growing widespread focus on Purpose, which is indicative of the organisation’s sense of social citizenship. However, the transforming world of work (TWOW) - particularly the potent mix of ongoing disruptive innovation, the 4th Industrial Revolution, and the gig economy with the totally unpredicted COVID19 pandemic - has resulted in the consequential adoption of different workplace strategies by organisations in terms of how, where, and when work takes place. Different employment relations (transient to permanent); work locations (on-site to remote); work time arrangements (full-time at work to flexible work schedules); and technology enablement (face-to-face to virtual) now form the basis of the employer/employee relationship. The different workplace strategies, fueled by the demands of TWOW, pose a substantive challenge to organisations of doing durable OI work, able to fulfill OI’s critical attributes of core, distinctive, differentiating, and enduring. OI work is contained in the ongoing, reciprocally interdependent stages of sense-breaking, sense-giving, internalisation, enactment, and affirmation. The objective of our paper is to explore how to do durable OI work relative to different workplace strategies in the TWOW. Using a conceptual-theoretical approach from a practice-based orientation, the paper addresses the following topics: distinguishes different workplace strategies based upon a time/place continuum; explicates stage-wise the differential organisational content and process consequences of these strategies for durable OI work; indicates the critical success factors of durable OI work under these differential conditions; recommends guidelines for OI work relative to TWOW; and points out ethical implications of all of the above.
Keywords: organisational identity, workplace strategies, new world of work, durable organisational identity work
Procedia PDF Downloads 200
987 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed either from a hydrodynamic bottom-following wing towed from a surface vessel or, in shallow water, from a towed floating platform. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and metadata for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
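The dipole modeling mentioned above can be illustrated with a small, hedged sketch of the point-dipole forward model that predicts the total-field anomaly at magnetometer positions; the dipole moment, ambient field, and survey geometry are assumed values, and this is not the authors' processing code:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r_sensor: np.ndarray, r_dipole: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Magnetic flux density (T) of a point dipole with moment m (A*m^2); positions in metres."""
    r = r_sensor - r_dipole
    r_norm = np.linalg.norm(r)
    r_hat = r / r_norm
    return MU0 / (4 * np.pi * r_norm**3) * (3 * np.dot(m, r_hat) * r_hat - m)

# Total-field anomaly along a hypothetical survey line 2 m above a buried target.
m = np.array([0.0, 0.0, 5.0])                        # assumed dipole moment (A*m^2)
earth = np.array([0.0, 18000e-9, -45000e-9])         # assumed ambient field (T)
for x in np.linspace(-5.0, 5.0, 11):
    b = dipole_field(np.array([x, 0.0, 2.0]), np.zeros(3), m)
    anomaly_nT = (np.linalg.norm(earth + b) - np.linalg.norm(earth)) * 1e9
    print(f"x = {x:+5.1f} m   total-field anomaly = {anomaly_nT:8.2f} nT")
```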
Procedia PDF Downloads 464
986 An Interactive User-Oriented Approach to Optimizing Public Space Lighting
Authors: Tamar Trop, Boris Portnov
Abstract:
Public Space Lighting (PSL) of outdoor urban areas promotes comfort, defines spaces and neighborhood identities, enhances perceived safety and security, and contributes to residential satisfaction and wellbeing. However, if excessive or misdirected, PSL leads to unnecessary energy waste and increased greenhouse gas emissions, poses a non-negligible threat to the nocturnal environment, and may become a potential health hazard. At present, PSL is designed according to international, regional, and national standards, which consolidate best practice. Yet, knowledge regarding the optimal light characteristics needed for creating a perception of personal comfort and safety in densely populated residential areas, and the factors associated with this perception, is still scarce. The presented study suggests a paradigm shift in designing PSL towards a user-centered approach, which incorporates pedestrians' perspectives into the process. The study is an ongoing joint research project between the China and Israel Ministries of Science and Technology. Its main objectives are to reveal inhabitants' perceptions of and preferences for PSL in different densely populated neighborhoods in China and Israel, and to develop a model that links instrumentally measured parameters of PSL (e.g., intensity, spectra and glare) with its perceived comfort and quality, while controlling for three groups of attributes: locational, temporal, and individual. To investigate measured and perceived PSL, the study employed various research methods and data collection tools, developed a location-based mobile application, and used multiple data sources, such as satellite multi-spectral night-time light imagery, census statistics, and detailed planning schemes. One of the study's preliminary findings is that a higher sense of safety in the investigated neighborhoods is not associated with higher levels of light intensity. This implies potential for energy saving in brightly illuminated residential areas. Study findings might contribute to the design of a smart and adaptive PSL strategy that enhances pedestrians' perceived safety and comfort while reducing light pollution and energy consumption.
Keywords: energy efficiency, light pollution, public space lighting, PSL, safety perceptions
Procedia PDF Downloads 134
985 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators
Authors: Guenther Schuh, Michael Riesener, Frederic Diels
Abstract:
Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Often, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method delivers customer-oriented and marketable products. There are initial approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM at the project management level. However, the question of which development scopes are preferably realized with empirical-adaptive or with deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company's technological ability, the prototype manufacturability and the potential solution space, as well as external factors like market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated against every single internal and external factor with regard to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
Keywords: agile, highly iterative development, agile-indicator, product development
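To make the clustering step concrete, here is a generic numpy sketch of fuzzy c-means applied to requirement vectors built from assumed Agile-Indicator scores; the data and the two-cluster interpretation are illustrative only, not the authors' tooling:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: returns cluster centres and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                     # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted cluster centres
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / dist ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Hypothetical requirements described by (internal, external) Agile-Indicator scores.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3], [0.85, 0.75], [0.15, 0.2]])
centres, U = fuzzy_c_means(X, c=2)
print("centres:\n", centres)
print("memberships:\n", np.round(U, 2))  # soft allocation to empirical-adaptive vs. deterministic-normative scopes
```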
Procedia PDF Downloads 246
984 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian
Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu
Abstract:
Recording history is a tradition dating back to ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy – the three combined depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees and pedigrees of early and medieval historical times. After the Song Dynasty, civilian society gradually emerged, and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities; hence, the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in a traditional village, have played a primary role in the identification of a clan and in maintaining local social order. Chinese genealogies are rich in their documentary materials. Take the Genealogy of Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of Wu Clan in Gaoqian is composed of a whole set of materials, from the Foreword to Family Trees, Family Rules, Family Rituals, Family Graces and Glories, Ode to an Ancestor's Portrait, the Manual for the Ancestor Temple, documents on great men in the clan, works written by learned men in the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity and the difficulties in reading them, genealogies seldom come to the attention of ordinary people. This paper, focusing on the case of the Genealogy of Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village. Based on this, the paper goes further to explore general methods for transferring physical genealogies into digital ones and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies such as family tree software, multimedia narratives, animation design, GIS applications and e-book creators.
Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS
Procedia PDF Downloads 222
983 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap
Authors: Nikolai N. Bogolubov, Andrey V. Soldatov
Abstract:
Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap'. Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at a much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This assumption contradicts the conventional assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways to experimental observation and practical implementation of the predicted effect are discussed too.
Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot
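For readers unfamiliar with the notation, a generic sketch of the driven two-level Hamiltonian with the permanent, non-equal diagonal dipole matrix elements referred to above; the symbols are standard quantum-optics notation and are not taken from the paper itself:

```latex
% Two-level 'atom' driven by a classical monochromatic field E(t) = E_0 \cos(\omega t).
% The transition dipole operator is allowed to have permanent diagonal elements:
\begin{align}
  \hat{H}(t) &= \frac{\hbar\omega_0}{2}\,\hat{\sigma}_z \;-\; \hat{d}\,E_0\cos(\omega t),\\
  \hat{d}    &= d_{11}\,|1\rangle\langle 1| \;+\; d_{22}\,|2\rangle\langle 2|
               \;+\; d_{12}\bigl(|1\rangle\langle 2| + |2\rangle\langle 1|\bigr),
               \qquad d_{11} \neq d_{22}.
\end{align}
% With d_{11} = d_{22} (the usual quantum-optics assumption), fluorescence is centred on
% the optical transition near \omega_0. With d_{11} \neq d_{22}, the mean dipole also
% acquires a slowly oscillating component, so continuous emission can appear at a much
% lower frequency set by the driving parameters (e.g., in the terahertz range).
```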
Procedia PDF Downloads 271
982 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of New Media, and particularly Social Media, through fixed or mobile devices has changed in a staggering way our perception of what is “intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries, conducted over almost a decade. Some of the main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and the post-Web 2.0 era. The role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for “elite users / technical knowledge keepers" into a place of "open sociability” for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of the “audience” we address in interpersonal communication. The previous "general and unknown audience" of personal home pages was converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the 'private' and 'public' nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model for understanding the way interpersonal communication evolves, based on a revision of social penetration theory, is proposed here.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 372981 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Predictive quality offers great potential to reduce the quality-control effort that is otherwise necessary, by predicting product quality and states from data. However, the series use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. The training of prediction models requires the highest possible generalisability, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the costs of eliminating errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
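To illustrate the regression-versus-classification question raised in this abstract, the following is a minimal sketch, with hypothetical column names, data file, and threshold, and with scikit-learn standing in for the authors' actual tooling: the same hydraulic test outcome is framed first as leakage-flow regression and then as a pass/fail classification.

```python
# Sketch only: compares a regression and a classification framing of the same
# quality-prediction task. Feature/target names and the pass/fail limit are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, accuracy_score

df = pd.read_csv("hydraulic_test_data.csv")          # hypothetical cross-process data set
X = df.drop(columns=["leakage_volume_flow"])         # process features along the value chain
y_reg = df["leakage_volume_flow"]                    # continuous target (regression)
y_clf = (y_reg > 0.5).astype(int)                    # pass/fail target via an assumed limit

X_tr, X_te, yr_tr, yr_te, yc_tr, yc_te = train_test_split(
    X, y_reg, y_clf, test_size=0.2, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, yr_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, yc_tr)

print("regression MAE:", mean_absolute_error(yr_te, reg.predict(X_te)))
print("classification accuracy:", accuracy_score(yc_te, clf.predict(X_te)))
```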
Procedia PDF Downloads 144980 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
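To make the described workflow concrete, here is a minimal Earth Engine (Python API) sketch of a Random Forest land cover classification of the kind reported above; the asset IDs for the catchment boundary and training points, the band list, and the class property name are assumptions, not the authors' actual assets.

```python
# Sketch of a GEE Random Forest land cover classification (asset IDs and labels assumed).
import ee
ee.Initialize()

aoi = ee.FeatureCollection("users/example/beterou_catchment")     # hypothetical asset
samples = ee.FeatureCollection("users/example/training_points")   # labelled points, 5 classes

# Sentinel-2 surface reflectance composite for the study period.
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterBounds(aoi)
        .filterDate("2020-06-01", "2021-03-31")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .median())

# Add terrain features (slope, elevation), which improved accuracy in the study.
dem = ee.Image("USGS/SRTMGL1_003")
stack = (s2.select(["B2", "B3", "B4", "B8", "B11", "B12"])
           .addBands(dem.rename("elevation"))
           .addBands(ee.Terrain.slope(dem).rename("slope")))

training = stack.sampleRegions(collection=samples, properties=["landcover"], scale=10)
rf = ee.Classifier.smileRandomForest(100).train(training, "landcover", stack.bandNames())
classified = stack.clip(aoi).classify(rf)
```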
Procedia PDF Downloads 63979 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, within which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are matched with banks’ loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that different types of data tend to complement each other for better performance. Typically, the traditional types of data that banks use, such as income, occupation, and credit history, update over longer cycles, so they cannot reflect more immediate changes, like financial status changes caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive and timely profile of the borrower’s credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment, because of their near-universal data coverage, and because they can by and large resolve the "thin-file" issue, given that digital footprints come in much larger volume and at higher frequency.Keywords: credit score, digital footprint, Fintech, machine learning
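The comparative analysis of machine learning methods for default prediction that the abstract mentions could look roughly like the sketch below; the footprint features, file name, and model choices are illustrative assumptions rather than the authors' actual pipeline.

```python
# Sketch: compare two classifiers for credit default prediction on digital footprint
# features, scored by ROC-AUC. Feature names and data source are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("footprint_with_defaults.csv")         # footprints matched to loan records
features = ["night_order_ratio", "avg_basket_value",    # shopping-pattern features (assumed)
            "has_profile_image", "nickname_has_digits"] # social-media features (assumed)
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["default"],
                                          test_size=0.3, stratify=df["default"],
                                          random_state=0)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, "AUC:", round(auc, 3))
```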
Procedia PDF Downloads 162978 Impact of Pedagogical Techniques on the Teaching of Sports Sciences
Authors: Muhammad Saleem
Abstract:
Background: The teaching of sports sciences encompasses a broad spectrum of disciplines, including biomechanics, physiology, psychology, and coaching. Effective pedagogical techniques are crucial in imparting both theoretical knowledge and practical skills necessary for students to excel in the field. The impact of these techniques on students’ learning outcomes, engagement, and professional preparedness remains a vital area of study. Objective: This study aims to evaluate the effectiveness of various pedagogical techniques used in the teaching of sports sciences. It seeks to identify which methods most significantly enhance student learning, retention, engagement, and practical application of knowledge. Methods: A mixed-methods approach was employed, including both quantitative and qualitative analyses. The study involved a comparative analysis of traditional lecture-based teaching, experiential learning, problem-based learning (PBL), and technology-enhanced learning (TEL). Data were collected through surveys, interviews, and academic performance assessments from students enrolled in sports sciences programs at multiple universities. Statistical analysis was used to evaluate academic performance, while thematic analysis was applied to qualitative data to capture student experiences and perceptions. Results: The findings indicate that experiential learning and PBL significantly improve students' understanding and retention of complex sports science concepts compared to traditional lectures. TEL was found to enhance engagement and provide students with flexible learning opportunities, but its impact on deep learning varied depending on the quality of the digital resources. Overall, a combination of experiential learning, PBL, and TEL was identified as the most effective pedagogical approach, leading to higher student satisfaction and better preparedness for real-world applications. Conclusion: The study underscores the importance of adopting diverse and student-centered pedagogical techniques in the teaching of sports sciences. While traditional lectures remain useful for foundational knowledge, integrating experiential learning, PBL, and TEL can substantially improve student outcomes. These findings suggest that educators should consider a blended approach to pedagogy to maximize the effectiveness of sports science education.Keywords: sport sciences, pedagogical techniques, health and physical education, problem-based learning, student engagement
Procedia PDF Downloads 26977 Evaluation of the Physico-Chemical and Microbial Properties of the Compost Leachate (CL) to Assess Its Role in the Bioremediation of Polyaromatic Hydrocarbons (PAHs)
Authors: Omaima A. Sharaf, Tarek A. Moussa, Said M. Badr El-Din, H. Moawad
Abstract:
Background: Polycyclic aromatic hydrocarbons (PAHs) pose great environmental and human health concerns owing to their widespread occurrence, persistence, and carcinogenic properties. Releases of PAHs to the wider environment due to anthropogenic activities have led to higher concentrations of these contaminants than would be expected from natural processes alone. This may result in a wide range of environmental problems, with contaminants accumulating in agricultural ecosystems and threatening sustainable agricultural development. Thus, this study aimed to evaluate the physico-chemical and microbial properties of compost leachate (CL) to assess its role as a nutrient and microbial source (biostimulation/bioaugmentation) for developing a cost-effective bioremediation technology for PAH-contaminated sites. Material and Methods: PAH-degrading bacteria were isolated from CL collected from a composting site located in central Scotland, UK. Isolation was carried out by enrichment using phenanthrene (PHR), pyrene (PYR) and benzo(a)pyrene (BaP) as the sole sources of carbon and energy. The isolates were characterized using a variety of phenotypic and molecular properties. Six different isolates were identified based on differences in morphological and biochemical tests. The efficiency of these isolates in PAH utilization was assessed. Further analysis was performed to define the taxonomic status and phylogenetic relationships between the most potent PAH-utilizing bacterial strains and other standard strains, using a molecular approach based on partial 16S rDNA gene sequence analysis. The 16S rDNA sequence analysis confirmed the biochemical identification: both biochemical and molecular identification assigned the isolates to Bacillus licheniformis, Pseudomonas aeruginosa, Alcaligenes faecalis, Serratia marcescens, Enterobacter cloacae and Providencia, which were identified as the prominent PAH-utilizers isolated from CL. Conclusion: This study indicates that the CL samples contain a diverse population of PAH-degrading bacteria and that the use of CL may have potential for the bioremediation of PAH-contaminated sites.Keywords: polycyclic aromatic hydrocarbons, physico-chemical analyses, compost leachate, microbial and biochemical analyses, phylogenetic relations, 16S rDNA sequence analysis
Procedia PDF Downloads 263976 What Are the Problems in the Case of Analysis of Selenium by Inductively Coupled Plasma Mass Spectrometry in Food and Food Raw Materials?
Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Dávid Andrási
Abstract:
For the analysis of elements in different food, feed and food raw material samples, a flame atomic absorption spectrometer (FAAS), a graphite furnace atomic absorption spectrometer (GF-AAS), an inductively coupled plasma optical emission spectrometer (ICP-OES) and an inductively coupled plasma mass spectrometer (ICP-MS) are generally applied. All of these analytical instruments are subject to different physical and chemical interfering effects when analysing food and food raw material samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays, it is very important to analyse ever smaller concentrations of elements. Of the above analytical instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations of elements. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using the CCT mode, certain elements have detection limits that are better by 1-3 orders of magnitude compared to a normal ICP-MS analytical method. The CCT mode has better detection limits mainly for the analysis of selenium (and of arsenic, germanium, vanadium, and chromium). To elaborate an analytical method for selenium with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) isobaric elemental, 2) isobaric molecular, and 3) physical interferences. When analysing food and food raw material samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness, and moreover different carbon contents of the food, feed and food raw material samples. In our research work, the effect of different water-soluble compounds, as well as the effect of various carbon contents (as sample matrix), on changes in the intensity of selenium was examined. In this way we could finally find 'opportunities' to decrease the error of selenium analysis. To analyse selenium in food, feed and food raw material samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of the carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of Se, which can be corrected using an internal standard (arsenic or tellurium).Keywords: selenium, ICP-MS, food, food raw material
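The internal-standard correction mentioned at the end of the abstract amounts to scaling each measured Se intensity by the recovery of a spiked element (As or Te) measured in the same run. A minimal sketch of that arithmetic, with hypothetical count rates, could look as follows; the exact correction scheme used by the authors is not specified in the abstract.

```python
# Sketch of an internal-standard correction for ICP-MS intensities (values assumed).
# corrected_Se = measured_Se / (measured_IS / expected_IS); the corrected intensity
# would then be quantified against a calibration curve from matrix-free standards.

def internal_standard_correction(se_counts, is_counts, is_counts_expected):
    """Scale the Se signal by the internal standard's recovery in the same run."""
    recovery = is_counts / is_counts_expected     # <1 if the matrix suppresses the signal
    return se_counts / recovery

# Example: a carbon-rich matrix changes both the Se and the Te internal-standard signal.
se_raw = 12500.0        # counts/s for a Se isotope in the sample (hypothetical)
te_measured = 9000.0    # counts/s for the Te internal standard in the sample
te_expected = 10000.0   # counts/s for the same Te spike in a clean standard

se_corrected = internal_standard_correction(se_raw, te_measured, te_expected)
print(round(se_corrected, 1))   # 13888.9 counts/s after correcting ~10 % suppression
```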
Procedia PDF Downloads 508975 Ultrasonic Micro Injection Molding: Manufacturing of Micro Plates of Biomaterials
Authors: Ariadna Manresa, Ines Ferrer
Abstract:
Introduction: The ultrasonic moulding process (USM) is a recent injection technology used to manufacture micro components. It is able to melt small amounts of material, so the waste of material is clearly reduced compared to microinjection molding. This is an important advantage when the materials are expensive, as medical biopolymers are. Micro-scaled components are involved in a variety of uses, such as biomedical applications. Replication fidelity is required, so it is important to stabilize the process and minimize the variability of the responses. The aim of this research is to investigate the influence of the main process parameters on the filling behaviour, the dimensional accuracy and the cavity pressure when a micro-plate is manufactured from biomaterials such as PLA and PCL. Methodology or Experimental Procedure: The specimens are manufactured using a Sonorus 1G Ultrasound Micro Molding Machine. The geometry used is a rectangular micro-plate of 15x5 mm with 1 mm thickness. The materials used for the investigation are PLA and PCL, chosen for their biocompatibility and degradation properties. The experimentation is divided into two phases. Firstly, the influence of the process parameters (vibration amplitude, sonotrode velocity, ultrasound time and compaction force) on filling behaviour is analysed in Phase 1. Next, once filling of the cavity is assured, the influence of both cooling time and compaction force on the cavity pressure, part temperature and dimensional accuracy is investigated in Phase 2. Results and Discussion: Filling behaviour depends on sonotrode velocity and vibration amplitude. When the ultrasonic time is longer, more ultrasonic energy is applied and the polymer temperature increases. Depending on the cooling time, it is possible that when the mold is opened the micro-plate is still too warm. Consequently, the polymer relieves its stored internal energy (ultrasonic and thermal) by expanding in the easiest direction. This is reflected in the dimensional accuracy, producing micro-plates thicker than the mold cavity. It has also been observed that the factor that most affects the cavity pressure is the compaction configuration during the manufacturing cycle. Conclusions: This research demonstrated the influence of the process parameters on the final micro-plates manufactured. Future work will focus on manufacturing other geometries and analysing the mechanical properties of the specimens.Keywords: biomaterial, biopolymer, micro injection molding, ultrasound
Procedia PDF Downloads 284974 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As in all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often subjective and lacking in objective figures and measurements. This results in confusion and creates barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as across locations and projects. The methodology is informed by the design of this framework, taking on the aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. Such a quantification framework enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches it is possible to create a data structure that describes how much of the architectural design process is automated, part by part. The data gathered in the paper serves the dual purpose of validating the framework and giving insights into the current state of automation within the architectural design process. The framework can be interrogated in many ways; preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, that the rate of progress is slowly increasing over the years, and that the majority of tasks in the design process reach a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.Keywords: analysis, architecture, automation, design process, technology
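The recursive work-breakdown structure with per-task automation milestones described in this abstract could be represented with a data structure along the following lines; the class name, milestone scale, and example tasks are purely illustrative assumptions, not the authors' framework itself.

```python
# Sketch: recursive task tree with automation milestones (all names/values assumed).
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignTask:
    name: str
    milestone: int = 0              # e.g. 0 = fully manual ... 4 = fully automated (assumed scale)
    max_milestone: int = 4
    subtasks: List["DesignTask"] = field(default_factory=list)

    def automation_fraction(self) -> float:
        """Average automation progress over this task and all of its subtasks."""
        if not self.subtasks:
            return self.milestone / self.max_milestone
        return sum(t.automation_fraction() for t in self.subtasks) / len(self.subtasks)

# Illustrative breakdown of 'design a building' into smaller, recursively split tasks.
building = DesignTask("Design building", subtasks=[
    DesignTask("Concept massing", milestone=3),
    DesignTask("Structural sizing", milestone=2),
    DesignTask("Drawing production", subtasks=[
        DesignTask("Plans", milestone=2),
        DesignTask("Schedules", milestone=4),
    ]),
])
print(f"{building.automation_fraction():.0%} automated")   # aggregate over the whole tree
```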
Procedia PDF Downloads 104973 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving
Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco
Abstract:
Augmented reality promises to be present in future driving: its immersive technology can show directions and maps, indicating important places with graphic elements when the car driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex activity in which situations commonly occur that require the driver's immediate attention and decisions that help to avoid accidents; therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the drivers' level of attention, since it is important to know the effect that such devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO and EMG Myoware are combined in the driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic can be controlled, as well as the number of pedestrians within the simulation, so that the driver interacts in real time; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors deliver continuous analog signals that need signal conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, and filtering removes selected frequency bands so that the signals are interpretable and free of noise before being converted from analog to digital form for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we work on the extraction of signal features and implement K-NN (k-nearest neighbor) and decision tree classification methods (supervised learning) that enable the study of the data for the identification of patterns and, through these classification methods, determine the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the required variables to determine the effect caused by augmented reality on people in simulated driving.Keywords: augmented reality, driving, physiological signals, test platform
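A rough sketch of the classification step described above might look like the following; the feature names, labels, and data file are assumptions, and scikit-learn stands in for whatever implementation the authors actually used.

```python
# Sketch: K-NN and decision-tree classification of driver attention level from
# physiological features extracted per time window (feature/label names hypothetical).
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

windows = pd.read_csv("driver_feature_windows.csv")    # one row per signal window
features = ["eeg_alpha_power", "eeg_beta_power", "heart_rate", "emg_rms"]
X, y = windows[features], windows["attention_level"]   # e.g. 'high' / 'low' labels

for model in (KNeighborsClassifier(n_neighbors=5), DecisionTreeClassifier(max_depth=5)):
    scores = cross_val_score(model, X, y, cv=5)         # 5-fold cross-validation
    print(type(model).__name__, "mean accuracy:", round(scores.mean(), 3))
```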
Procedia PDF Downloads 142