36 Integrating Experiential Real-World Learning in Undergraduate Degrees: Maximizing Benefits and Overcoming Challenges
Authors: Anne E. Goodenough
Abstract:
One of the most important roles of higher education professionals is to ensure that graduates have excellent employment prospects. This means providing students with the skills necessary to be immediately effective in the workplace. Increasingly, universities are seeking to achieve this by moving from lecture-based and campus-delivered curricula to more varied delivery, which takes students out of their academic comfort zone and allows them to engage with, and be challenged by, real world issues. One popular approach is integration of problem-based learning (PBL) projects into curricula. However, although the potential benefits of PBL are considerable, it can be difficult to devise projects that are genuinely meaningful; poorly conceived projects risk being regarded as mere ‘hoop jumping’ exercises. This study examines three-way partnerships between academics, students, and external link organizations. It studied the experiences of all partners involved in different collaborative projects to identify how benefits can be maximized and challenges overcome. Focal collaborations included: (1) development of real-world modules with novel assessment whereby the organization became the ‘client’ for student consultancy work; (2) frameworks where students collected/analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; (4) immersive fieldwork projects in novel locations; and (5) students working as partners on staff-led research with link organizations. Focus groups, questionnaires and semi-structured interviews were used to identify opportunities and barriers, while quantitative analysis of students’ grades was used to determine academic effectiveness. Common challenges identified by academics were finding suitable link organizations and devising projects that simultaneously provided educational opportunities and tangible benefits.
There was no ‘one size fits all’ formula for success, but careful planning and ensuring clarity of roles/responsibilities were vital. Students were very positive about collaboration projects. They identified benefits to confidence, time-keeping and communication, as well as conveying their enthusiasm when their work was of benefit to the wider community. They frequently highlighted employability opportunities that collaborative projects opened up, and analysis of grades demonstrated the potential for such projects to increase attainment. Organizations generally recognized the value of project outputs, but often required considerable assistance to put the right scaffolding in place to ensure projects worked. Benefits were maximized by ensuring projects were well-designed, innovative, and challenging. Co-publication of projects in peer-reviewed journals sometimes gave additional benefits for all involved, being especially beneficial for students’ curricula vitae. PBL and student projects are by no means new pedagogic approaches: the novelty here came from creating meaningful three-way partnerships between academics, students, and link organizations at all undergraduate levels. Such collaborations can allow students to make a genuine contribution to knowledge, answer real questions, and solve actual problems, all while providing tangible benefits to organizations. Because projects are actually needed, students tend to engage with learning at a deep level. This enhances student experience, increases attainment, encourages development of subject-specific and transferable skills, and promotes networking opportunities.
Such projects frequently rely upon students and staff working collaboratively, thereby also acting to break down the traditional teacher/learner division that is typically unhelpful in developing students as advanced learners.
Keywords: higher education, employability, link organizations, innovative teaching and learning methods, interactions between enterprise and education, student experience
Procedia PDF Downloads 183
35 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without long waiting times, thereby enabling the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations.
We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
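The core of every implementation compared in the abstract is the same continuous update rule: convolve the grid with a kernel, map the result through a growth function, and clip back to [0, 1]. A minimal CPU sketch in Python follows; the parameter values and function names are illustrative, not taken from the paper, and the CUDA version would parallelize the per-cell loops (one thread per cell):

```python
import math

def bell(x, mu, sigma):
    """Gaussian bump used as Lenia's growth mapping (illustrative parameters)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def lenia_step(world, kernel, dt=0.1, mu=0.15, sigma=0.015):
    """One continuous-state Lenia update on a toroidal grid.

    world  -- 2D list of floats in [0, 1]
    kernel -- 2D list of weights summing to 1 (naive wraparound convolution)
    """
    h, w = len(world), len(world[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            u = 0.0  # neighborhood "potential" at (y, x)
            for ky in range(kh):
                for kx in range(kw):
                    yy = (y + ky - kh // 2) % h
                    xx = (x + kx - kw // 2) % w
                    u += kernel[ky][kx] * world[yy][xx]
            growth = 2 * bell(u, mu, sigma) - 1        # maps to [-1, 1]
            out[y][x] = min(1.0, max(0.0, world[y][x] + dt * growth))
    return out
```

This naive convolution is O(h·w·kh·kw) per step, which is exactly why the paper's FFT-based convolution and GPU parallelism matter for large grids and kernels.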
Procedia PDF Downloads 41
34 Analysis of Capillarity Phenomenon Models in Primary and Secondary Education in Spain: A Case Study on the Design, Implementation, and Analysis of an Inquiry-Based Teaching Sequence
Authors: E. Cascarosa-Salillas, J. Pozuelo-Muñoz, C. Rodríguez-Casals, A. de Echave
Abstract:
This study focuses on improving the understanding of the capillarity phenomenon among Primary and Secondary Education students. Despite being a common concept in daily life and covered in various subjects, students’ comprehension remains limited. This work explores inquiry-based teaching methods to build a conceptual foundation of capillarity by examining the forces involved. The study adopts an inquiry-based teaching approach supported by research emphasizing the importance of modeling in science education. Scientific modeling aids students in applying knowledge across varied contexts and developing systemic thinking, allowing them to construct scientific models applicable to everyday situations. This methodology fosters the development of scientific competencies such as observation, hypothesis formulation, and communication. The research was structured as a case study with activities designed for Spanish Primary and Secondary Education students aged 9 to 13. The process included curriculum analysis, the design of an activity sequence, and its implementation in classrooms. Implementation began with questions that students needed to resolve using available materials, encouraging observation, experimentation, and the re-contextualization of activities to everyday phenomena where capillarity is observed. Data collection tools included audio and video recordings of the sessions, which were transcribed and analyzed alongside the students' written work. Students' drawings on capillarity were also collected and categorized. Qualitative analyses of the activities showed that, through inquiry, students managed to construct various models of capillarity, reflecting an improved understanding of the phenomenon. Initial activities allowed students to express prior ideas and formulate hypotheses, which were then refined and expanded in subsequent sessions. 
The generalization and use of graphical representations of their ideas on capillarity, analyzed alongside their written work, enabled the categorization of capillarity models: (1) Intuitive Model: a visual and straightforward representation without explanations of how or why the phenomenon occurs. Simple symbolic elements, such as arrows to indicate water rising, are used without detailed or causal understanding. It reflects an initial, immediate perception of the phenomenon, interpreted as something that happens "on its own" without delving into the microscopic level. (2) Explanatory Intuitive Model: students begin to incorporate causal explanations, though still limited and without complete scientific accuracy. They represent the role of materials and use basic terms such as ‘absorption’ or ‘attraction’ to describe the rise of water. This model shows a more complex understanding where the phenomenon is not only observed but also partially explained in terms of interaction, though without microscopic detail. (3) School Scientific Model: this model reflects a more advanced and detailed understanding. Students represent the phenomenon using specific scientific concepts like ‘surface tension,’ ‘cohesion,’ and ‘adhesion,’ including structured explanations connecting microscopic and macroscopic levels. At this level, students model the phenomenon as a coherent system, demonstrating how various forces or properties interact in the capillarity process, with representations on a microscopic level. The study demonstrated that the capillarity phenomenon can be effectively approached in class through the experimental observation of everyday phenomena, explained through guided inquiry learning.
The methodology facilitated students’ construction of capillarity models and enabled analysis of an interaction of different forces occurring at the microscopic level.
Keywords: capillarity, inquiry-based learning, scientific modeling, primary and secondary education, conceptual understanding, drawing analysis
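The balance of forces that the school scientific model describes qualitatively can be made quantitative with Jurin's law, h = 2γ·cos θ / (ρ·g·r). The sketch below is for illustration only; the numerical values are standard textbook figures for water in a narrow glass tube, not data from the study:

```python
import math

def capillary_rise(gamma, theta_deg, rho, radius, g=9.81):
    """Jurin's law: equilibrium rise height h = 2*gamma*cos(theta) / (rho*g*r).

    gamma     -- surface tension (N/m)
    theta_deg -- contact angle (degrees)
    rho       -- liquid density (kg/m^3)
    radius    -- tube radius (m)
    """
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * radius)

# Water in a 0.5 mm glass tube (gamma ~ 0.0728 N/m, theta ~ 0 degrees)
h = capillary_rise(0.0728, 0.0, 1000.0, 0.0005)  # roughly 3 cm of rise
```

The inverse dependence on radius is the observable the students model: narrower tubes (or finer pores in paper and fabric) pull the water visibly higher.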
Procedia PDF Downloads 13
33 Synthetic Method of Contextual Knowledge Extraction
Authors: Olga Kononova, Sergey Lyapin
Abstract:
The global information society requires transparency and reliability of data, as well as the ability to manage information resources independently: in particular, to search, analyze, and evaluate information, thereby obtaining new expertise. Moreover, it is the satisfaction of society's information needs that increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing contextual knowledge. Explication of the various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation purposes, the authors use the services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented queries, frequency-ranked queries), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is implemented as an Internet architecture in a WWS configuration (Web browser / Web server / SQL server). An advantage of using 'Humanitariana' is the possibility of combining the resources of several organizations. Scholars and research groups may work in a local network mode or in distributed IT environments, with the ability to access resources on the servers of any participating organization. The paper discusses some specific cases of contextual knowledge explication with the use of the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'.
An analysis of trends in the subject-themed texts allowed the authors to propose a content analysis methodology that combines full-text search with automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is presented as a table containing a column with a frequency-ranked list of words (nouns), as well as columns indicating the absolute frequency (count) and the relative frequency of occurrence of each word (in %). The analysis of 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media attributed the main role in these processes to the government, which provided public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) and the state (government, processes organizer); interaction management (public officer, processes performer) and technology (infrastructure). Isolating these factors may lead to changes in the model of electronic interaction between government and society. In this study, the dominant social problems and the prevalence of different categories of subjects of computer gaming in science papers from 2005 to 2015 were also identified. Several types of contextual knowledge were thus identified: micro context; macro context; dynamic context; thematic collection of queries (interactive contextual knowledge expanding the composition of e-library information resources); and multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm).
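A frequency table of the 'terminogramma' kind can be sketched in a few lines. The sketch below uses naive whitespace tokenization and lowercasing, whereas the actual 'Humanitariana' system ranks extracted nouns; the function name and column layout are illustrative:

```python
from collections import Counter

def terminogramma(text, top_n=10):
    """Frequency-ranked word table with absolute and relative frequencies.

    Returns rows of (word, absolute_count, relative_frequency_percent),
    ranked by descending frequency -- a simplified stand-in for the
    noun-based terminogramma described in the abstract.
    """
    words = [w.strip('.,;:!?()"\'').lower() for w in text.split()]
    words = [w for w in words if w]
    total = len(words)
    counts = Counter(words)
    return [(w, n, round(100.0 * n / total, 2))
            for w, n in counts.most_common(top_n)]

rows = terminogramma("government portal government services portal government")
# rows[0] is the most frequent word with its count and share of all tokens
```

Expert analysis then proceeds from this ranked list to the contexts (paragraphs) in which the top-ranked terms occur.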
Further studies can be pursued both by expanding the resource base on which they are conducted and by developing appropriate tools.
Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of contextual knowledge extraction
Procedia PDF Downloads 359
32 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. 
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating that predictions on the 1-100 (G-A) efficiency-class scale may deviate by 4.12 points on average. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
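The reported R² and MAE follow their standard definitions; a minimal sketch of both metrics is shown below (the study's XGBoost and SHAP pipeline itself is not reproduced here):

```python
def mae(y_true, y_pred):
    """Mean absolute error: average of |y - y_hat| over all samples."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

On the 1-100 efficiency-class scale used above, an MAE of 4.12 means the predicted score is off by about four points on average, while R² measures the share of variance in the true scores the model explains.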
Procedia PDF Downloads 57
31 Experiences and Perceptions of the Barriers and Facilitators of Continence Care Provision in Residential and Nursing Homes for Older Adults: A Systematic Evidence Synthesis and Qualitative Exploration
Authors: Jennifer Wheeldon, Nick de Viggiani, Nikki Cotterill
Abstract:
Background: Urinary and fecal incontinence affect a significant proportion of older adults aged 65 and over who permanently reside in residential and nursing home facilities. Incontinence symptoms have been linked to comorbidities, an increased risk of infection and reduced quality of life and mental wellbeing of residents. However, continence care provision can often be poor, further compromising the health and wellbeing of this vulnerable population. Objectives: To identify experiences and perceptions of continence care provision in older adult residential care settings and to identify factors that help or hinder good continence care provision. Settings included both residential care homes and nursing homes for older adults. Methods: A qualitative evidence synthesis using systematic review methodology established the current evidence-base. Data from 20 qualitative and mixed-method studies was appraised and synthesized. Following the review process, 10* qualitative interviews with staff working in older adult residential care settings were conducted across six* sites, which included registered managers, registered nurses and nursing/care assistants/aides. Purposive sampling recruited individuals from across England. Both evidence synthesis and interview data was analyzed thematically, both manually and with NVivo software. Results: The evidence synthesis revealed complex barriers and facilitators for continence care provision at three influencing levels: macro (structural and societal external influences), meso (organizational and institutional influences) and micro (day-to-day actions of individuals impacting service delivery). Macro-level barriers included negative stigmas relating to incontinence, aging and working in the older adult social care sector, restriction of continence care resources such as containment products (i.e. 
pads), short staffing in care facilities, shortfalls in the professional education and training of care home staff, and the complex health and social care needs of older adult residents. Meso-level barriers included task-centered organizational cultures, ageist institutional perspectives regarding old age and incontinence symptoms, inadequate care home management and poor communication and teamwork among care staff. Micro-level barriers included poor knowledge and negative attitudes of care home staff and residents regarding incontinence symptoms and symptom management and treatment. Facilitators at the micro-level included proactive and inclusive leadership skills of individuals in management roles. Conclusions: The findings of the evidence synthesis study help to outline the complexities of continence care provision in older adult care home facilities. Macro, meso and micro level influences demonstrate problematic and interrelated barriers across international contexts, indicating that improving continence care in this setting is extremely challenging due to the multiple levels at which care provision and services are impacted. Both international and national older adult social care policy-makers, researchers and service providers must recognize this complexity, and any intervention seeking to improve continence care in older adult care home settings must be planned accordingly, with appreciation of the complex and interrelated influences. It is anticipated that the findings of the qualitative interviews will shed further light on the national context of continence care provision specific to England; data collection is ongoing*. * Sample size is envisaged to be between 20-30 participants from multiple sites by Spring 2023.
Keywords: continence care, residential and nursing homes, evidence synthesis, qualitative
Procedia PDF Downloads 86
30 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter
Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski
Abstract:
Today’s progress in rotorcraft is mostly associated with an optimization of aircraft performance achieved by active and passive modifications of main rotor assemblies and a tail propeller. The key task is to improve their performance and the hover quality factor for rotors without changing specific fuel consumption. One of the tasks in improving the helicopter is an active optimization of the main rotor across flight stages, i.e., ascent, cruise, and descent. An active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems modifying aerodynamic coefficients in current solutions is relatively high but significantly affects structural strength. The proposed solution for actively changing aerodynamic characteristics assumes a periodic change of the geometric features of blades depending on flight stage. Changing the geometric parameters of blade warping enables an optimization of main rotor performance depending on helicopter flight stage. Structurally, the adoption of shape memory alloys does not significantly affect rotor blade fatigue strength, which helps reduce the costs associated with adapting the system to existing blades, and gains from better performance can easily amortize such a modification and improve the profitability of such a structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is the selection of design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor. This design concept assumes a three-bladed main rotor with a chord of 0.07 m and radius R = 1 m. The value of rotor speed is a calculated parameter of an optimization function.
To specify the initial distribution of geometric warping, special software was created that uses a blade element numerical method accounting for dynamic design features such as blade oscillations in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of components and their mutual interaction resulting from the forces. The key elements of each rotor are the shaft, the hub, and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which provides grounds for obtaining certification of the structure's strength. The joint rotor contains numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear may appear, have been indicated. The numerical analysis showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element result in a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2, and in the case of the shaft, this factor is more than 3. However, it must be remembered that a structure is only as strong as its weakest element. The rotor designed for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter
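The quoted safety factors follow the usual static definition k = allowable stress / peak working stress. A sketch follows; the stress values are hypothetical, chosen only to reproduce the reported pin factor of k = 1.7, and are not taken from the analysis:

```python
def safety_factor(allowable_stress, working_stress):
    """Static safety factor k = sigma_allowable / sigma_working (same units)."""
    return allowable_stress / working_stress

def meets_certification(k, k_min=1.5):
    """Certification threshold used in the study: k must exceed 1.5."""
    return k > k_min

# Hypothetical pin: 510 MPa allowable stress, 300 MPa peak working stress
k_pin = safety_factor(510e6, 300e6)   # k = 1.7, as reported for the pin
```

In this scheme each component (pin, yoke, shaft) is checked against the same threshold, and the assembly's rating is governed by the smallest k, matching the "weakest element" remark above.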
Procedia PDF Downloads 158
29 Source of Professionalism and Knowledge among Sport Industry Professionals in India with Limited Sport Management Higher Education
Authors: Sandhya Manjunath
Abstract:
The World Association for Sport Management (WASM) was established in 2012, and its mission is "to facilitate sport management research, teaching, and learning excellence and professional practice worldwide". As the field of sport management evolves, it has seen increasing globalization not only of the sport product but also of education, with many educators internationalizing courses and curricula. Curricula should reflect globally recognized issues and disseminate specific intercultural knowledge, skills, and practices, but regional disparities still exist. For example, while India has some of the most ardent sports fans and events in the world, sport management education programs and the development of a proper curriculum in India are still in their nascent stages, especially in comparison to the United States and Europe. Using the extant literature on professionalization and institutional theory, this study aims to investigate the source of knowledge and professionalism of sports managers in India with limited sport management education programs and to subsequently develop a conceptual framework that addresses any gaps or disparities across regions. This study will contribute to WASM's (2022) mission statement of research practice worldwide, specifically by addressing the existing disparities between regions. Additionally, this study may emphasize the value of higher education among professionals entering the workforce in the sport industry. Most importantly, this will be a pioneering study highlighting the social issue of limited sport management higher education programs in India and improving professional research practices. Sport management became a field of study in the 1980s, and scholars have studied its professionalization since this time. Dowling, Edwards, & Washington (2013) suggest that professionalization can be categorized into three broad categories of organizational, systemic, and occupational professionalization.
However, scant research has integrated the concept of professionalization with institutional theory. A comprehensive review of the literature reveals that sports industry research is progressing in every country worldwide at its own pace. However, there is very little research evidence about the Indian sports industry and the country's limited higher education sport management programs. A growing need exists for sports scholars to pursue research in developing countries like India to develop theoretical frameworks and academic instruments to evaluate the current standards of qualified professionals in sport management, sport marketing, venue and facilities management, sport governance, and development-related activities. This study may postulate a model highlighting the value of higher education in sports management. Education stakeholders include governments, sports organizations and their representatives, educational institutions, and accrediting bodies. As these stakeholders work collaboratively in developed countries like the United States and Europe and developing countries like India, they simultaneously influence the professionalization (i.e., organizational, systemic, and occupational) of sport management education globally. The results of this quantitative study will investigate the current standards of education in India and the source of knowledge among industry professionals. Sports industry professionals will be randomly selected to complete the COSM survey on PsychData and rate their perceived knowledge and professionalism on a Likert scale. Additionally, they will answer questions involving their competencies, experience, or challenges in contributing to Indian sports management research. Multivariate regression will be used to measure the degree to which the various independent variables impact the current knowledge, contribution to research, and professionalism of India's sports industry professionals. 
This quantitative study will contribute to the limited academic literature available to Indian sports practitioners. Additionally, it will synthesize knowledge from previous work on professionalism and institutional knowledge, providing a springboard for new research that will fill the existing knowledge gaps. While further empirical investigation is warranted, our conceptualization contributes to and highlights India's burgeoning sport management industry.
Keywords: sport management, professionalism, source of knowledge, higher education, India
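The multivariate regression step described in the abstract above can be sketched as follows. This is a minimal illustrative sketch: the predictor names and the synthetic Likert-scale data are assumptions for demonstration, not the actual COSM survey items or results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic Likert-scale (1-5) responses from 100 respondents. The three
# predictor columns (education level, industry experience, access to
# training) are hypothetical stand-ins for COSM survey items.
X = rng.integers(1, 6, size=(100, 3)).astype(float)

# Hypothetical dependent variable: perceived professionalism score.
y = 0.8 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.5, 100)

# Ordinary least squares fit, with an intercept column prepended.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: share of variance in professionalism explained by the predictors.
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()
print(coef.round(2), round(r2, 2))
```

The fitted coefficients estimate the degree to which each independent variable impacts the professionalism score, which is exactly what the study proposes to measure.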
Procedia PDF Downloads 69
28 Leading, Teaching and Learning “in the Middle”: Experiences, Beliefs, and Values of Instructional Leaders, Teachers, and Students in Finland, Germany, and Canada
Authors: Brandy Yee, Dianne Yee
Abstract:
Through the exploration of the lived experiences, beliefs and values of instructional leaders, teachers and students in Finland, Germany and Canada, we investigated the factors which contribute to developmentally responsive, intellectually engaging middle-level learning environments for early adolescents. Student-centred leadership dimensions, effective instructional practices and student agency were examined through the lens of current policy and research on middle-level learning environments emerging from the Canadian province of Manitoba. Consideration of these three research perspectives in the context of early adolescent learning, placed against an international backdrop, provided a previously undocumented perspective on leading, teaching and learning in the middle years. Aligning with a social constructivist, qualitative research paradigm, the study incorporated collective case study methodology, along with constructivist grounded theory methods of data analysis. Data were collected through semi-structured individual and focus group interviews and document review, as well as direct and participant observation. Three case study narratives were developed to share the rich stories of study participants, who had been selected using maximum variation and intensity sampling techniques. Interview transcript data were coded using processes from constructivist grounded theory. A cross-case analysis yielded a conceptual framework highlighting key factors that were found to be significant in the establishment of developmentally responsive, intellectually engaging middle-level learning environments. Seven core categories emerged from the cross-case analysis as common to all three countries. 
Within the visual conceptual framework (which depicts the interconnected nature of leading, teaching and learning in middle-level learning environments), these seven core categories were grouped into Essential Factors (student agency, voice and choice), Contextual Factors (instructional practices; school culture; engaging families and the community), Synergistic Factors (instructional leadership) and Cornerstone Factors (education as a fundamental cultural value; preservice, in-service and ongoing teacher development). In addition, sub-factors emerged from recurring codes in the data and identified specific characteristics and actions found in developmentally responsive, intellectually engaging middle-level learning environments. Although this study focused on 12 schools in Finland, Germany and Canada, it informs the practice of educators working with early adolescent learners in middle-level learning environments internationally. The authentic voices of early adolescent learners are the most important resource educators have to gauge if they are creating effective learning environments for their students. Ongoing professional dialogue and learning are essential to ensure teachers are supported in their work and develop the pedagogical practices needed to meet the needs of early adolescent learners. It is critical to balance consistency, coherence and dependability in the school environment with the necessary flexibility in order to support the unique learning needs of early adolescents. Educators must intentionally create a school culture that unites teachers, students and their families in support of a common purpose, as well as nurture positive relationships between the school and its community.
A large, urban school district in Canada has implemented a school cohort-based model to begin to bring developmentally responsive, intellectually engaging middle-level learning environments to scale.
Keywords: developmentally responsive learning environments, early adolescents, middle level learning, middle years, instructional leadership, instructional practices, intellectually engaging learning environments, leadership dimensions, student agency
27 Correlation of Unsuited and Suited 5ᵗʰ Female Hybrid III Anthropometric Test Device Model under Multi-Axial Simulated Orion Abort and Landing Conditions
Authors: Christian J. Kennett, Mark A. Baldwin
Abstract:
As several companies are working towards returning American astronauts back to space on US-made spacecraft, NASA developed a human flight certification-by-test and analysis approach due to the cost-prohibitive nature of extensive testing. This process relies heavily on the quality of analytical models to accurately predict crew injury potential specific to each spacecraft and under dynamic environments not tested. As the prime contractor on the Orion spacecraft, Lockheed Martin was tasked with quantifying the correlation of analytical anthropometric test devices (ATDs), also known as crash test dummies, against test measurements under representative impact conditions. Multiple dynamic impact sled tests were conducted to characterize Hybrid III 5th ATD lumbar, head, and neck responses with and without a modified shuttle-era advanced crew escape suit (ACES) under simulated Orion landing and abort conditions. Each ATD was restrained via a 5-point harness in a mockup Orion seat fixed to a dynamic impact sled at the Wright Patterson Air Force Base (WPAFB) Biodynamics Laboratory in the horizontal impact accelerator (HIA). ATDs were subjected to multiple impact magnitudes, half-sine pulse rise times, and XZ ‘eyeballs out/down’ or Z-axis ‘eyeballs down’ orientations for landing or an X-axis ‘eyeballs in’ orientation for abort. Several helmet constraint devices were evaluated during suited testing. Unique finite element models (FEMs) of the unsuited and suited sled test configurations were developed using an analytical 5th ATD model developed by LSTC (Livermore, CA) and deformable representations of the seat, suit, helmet constraint countermeasures, and body restraints. Explicit FE analyses were conducted using the non-linear solver LS-DYNA.
Head linear and rotational acceleration, head rotational velocity, upper neck force and moment, and lumbar force time histories were compared between test and analysis using the enhanced error assessment of response time histories (EEARTH) composite score index. The EEARTH rating, paired with the correlation and analysis (CORA) corridor rating, provided a composite ISO score that was used to assess model correlation accuracy. NASA occupant protection subject matter experts established an ISO score of 0.5 or greater as the minimum expectation for correlating analytical and experimental ATD responses. Unsuited 5th ATD head X, Z, and resultant linear accelerations, head Y rotational accelerations and velocities, neck X and Z forces, and lumbar Z forces all showed consistent ISO scores above 0.5 in the XZ impact orientation, regardless of peak g-level or rise time. Upper neck Y moments were near or above the 0.5 score for most of the XZ cases. Similar trends were found in the XZ and Z-axis suited tests despite the addition of several different countermeasures for restraining the helmet. For the X-axis ‘eyeballs in’ loading direction, only resultant head linear acceleration and lumbar Z-axis force produced ISO scores above 0.5, whether unsuited or suited. The analytical LSTC 5th ATD model showed good correlation across multiple head, neck, and lumbar responses in both the unsuited and suited configurations when loaded in the XZ ‘eyeballs out/down’ direction. Upper neck moments were consistently the most difficult to predict, regardless of impact direction or test configuration.
Keywords: impact biomechanics, manned spaceflight, model correlation, multi-axial loading
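The pass/fail logic of comparing a simulated time history against a measured one using a composite score with a 0.5 threshold can be illustrated with a toy example. Note that EEARTH and CORA are specific standardized rating algorithms; the score below is a simplified stand-in (zero-lag correlation plus peak-magnitude agreement), not the actual EEARTH/CORA implementation.

```python
import numpy as np

def simple_iso_like_score(test, sim):
    """Toy composite score in [0, 1] comparing a simulated time history
    with a test measurement. NOT the EEARTH/CORA algorithm; a simplified
    stand-in combining shape and peak-magnitude agreement."""
    # Shape agreement: normalized cross-correlation at zero lag.
    shape = np.dot(test, sim) / (np.linalg.norm(test) * np.linalg.norm(sim))
    # Magnitude agreement: relative error of the response peaks.
    mag = 1 - abs(test.max() - sim.max()) / max(abs(test.max()), abs(sim.max()))
    return 0.5 * (max(shape, 0.0) + max(mag, 0.0))

t = np.linspace(0, 0.1, 500)                  # 100 ms half-sine-like pulse
test = 20 * np.sin(np.pi * t / 0.1)           # "measured" response, arbitrary units
sim = 18 * np.sin(np.pi * (t - 0.002) / 0.1)  # simulation: slight lag, lower peak

score = simple_iso_like_score(test, sim)
print(score >= 0.5)  # meets the 0.5 minimum expectation in this toy case
```

A simulation that reproduces the pulse shape but misses the peak (as the upper neck moments reportedly did) would score lower on the magnitude term and could fall below the 0.5 threshold.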
26 Restoring Total Form and Function in Patients with Lower Limb Bony Defects Utilizing Patient-Specific Fused Deposition Modelling - A Neoteric Multidisciplinary Reconstructive Approach
Authors: Divya SY. Ang, Mark B. Tan, Nicholas EM. Yeo, Siti RB. Sudirman, Khong Yik Chew
Abstract:
Introduction: The importance of the amalgamation of technological and engineering advances with surgical principles of reconstruction cannot be overemphasized. Earlier detection of cancer, together with the consequences of high-speed living and neglect, such as traumatic injuries and infection, is resulting in increasingly younger patients with bone defects. This may result in malformations and suboptimal function that are more noticeable and palpable in the younger, active demographic. Our team proposes a technique that combines multidisciplinary effort, tissue engineering and reconstructive principles. Methods/Materials: Our patient was a young competitive footballer in his early 30s who was diagnosed with submandibular adenoid cystic carcinoma with bony involvement. He was thus counselled for a right hemimandibulectomy, floor-of-mouth resection, right selective neck dissection, tracheostomy, and free fibular flap reconstruction of his mandible, and required post-operative radiotherapy. Being young and in his prime sporting years, he was unable to accept the morbidities associated with using his fibula to reconstruct his mandible, despite it being the gold standard reconstructive option. The fibula is an ideal vascularized bone flap because it is reliable and easily shaped, with relatively minimal impact on functional outcomes. The fibula contributes up to 30% of weight-bearing and is the attachment for the lateral compartment muscles; in footballers it is stronger with respect to lateral bending. When harvesting the fibula, the distal 6-8 cm, up to 10% of the total length, is preserved to maintain the ankle's stability, thus minimizing the impact on daily activities. Studies have nevertheless noted gait variability post-operatively, so a return to a premorbid competitive level may be doubtful. To improve his functional outcomes, the decision was made to try to restore the fibula's form and function.
Using the concept of Fused Deposition Modelling (FDM), our team, comprising Plastics, Otolaryngology, Orthopedics and Radiology, worked with Osteopore to design a 3D bioresorbable implant to regenerate the fibula defect (14.5 cm). Bone marrow was harvested by reaming the contralateral hip prior to the wide resection. 30 ml of his blood was obtained to extract platelet-rich plasma. These were packed into the Osteopore 3D-printed bone scaffold, which was then secured into the fibula defect with titanium plates and screws. The flexor hallucis longus and soleus were anchored along the construct and interosseous membrane, all done in a single setting. Results: He was reviewed closely as an outpatient over 10 months post-operatively. He reported no discernible loss or difference in ankle function. He is satisfied and back in training, and our team has video and photographs that substantiate his progress. Conclusion: FDM allows regeneration of long bone defects. However, we aimed also to restore the eversion and inversion that are imperative for footballers, and hence reattached his previously dissected muscles along the length of the Osteopore implant. We believe that the reattachment of the muscle not only stabilizes the construct but also allows optimum muscle tensioning when moving his ankle. This is a simple but effective technique for restoring complete function and form in a young patient whose fine muscle control is imperative to his way of life.
Keywords: fused deposition modelling, functional reconstruction, lower limb bony defects, regenerative surgery, 3D printing, tissue engineering
25 Predicting Acceptance and Adoption of Renewable Energy Community Solutions: The Prosumer Psychology
Authors: Francois Brambati, Daniele Ruscio, Federica Biassoni, Rebecca Hueting, Alessandra Tedeschi
Abstract:
This research, in the frame of social acceptance of renewable energies and community-based production and consumption models, aims at (1) supporting a data-driven approach to dealing with climate change and (2) identifying and quantifying the psycho-sociological dimensions and factors that could support the transition from a technology-driven approach to a consumer-driven approach through the emerging “prosumer” business models. In addition to the existing social acceptance dimensions, this research tries to identify a purely individual psychological fourth dimension to understand the processes and factors underlying individual acceptance and adoption of renewable energy business models, realizing a Prosumer Acceptance Index. Questionnaire data collection was performed through an online survey platform, combining standardized and ad-hoc questions adapted for the research purposes. To identify the main factors (individual/social) influencing the relation with renewable energy technology (RET) adoption, a factor analysis was conducted to identify the latent variables that are related to each other, revealing 5 latent psychological factors: Factor 1. Concern about environmental issues: global environmental issues awareness, strong beliefs and pro-environmental attitudes raising concern about environmental issues. Factor 2. Interest in energy sharing: attentiveness to solutions for the local community's collective consumption, to reduce individual environmental impact, sustainably improve the local community, and sell extra energy to the general electricity grid. Factor 3. Concern about climate change: awareness of the consequences of environmental issues for climate change, especially on a global scale, developing pro-environmental attitudes on the course of global climate change and sensitivity about behaviours aimed at mitigating such human impact. Factor 4. Social influence: social support seeking from peers.
When adopting RET, advice from significant others is sought, internalizing the commonly perceived social norms of the national/geographical region. Factor 5. Impact on bill cost: inclination to adopt RET when economic incentives enter the decision-making process, i.e., when the behaviour is expected to result in less expensive or unchanged bills. Linear regression was conducted to identify and quantify the factors that best predict the behavioural intention to become a prosumer. An overall scale measuring “acceptance of a renewable energy solution” was used as the dependent variable, allowing us to quantify the contribution of the five factors: awareness of environmental issues and climate change; environmental attitudes; social influence; and environmental risk perception. Three variables significantly predict the scores of the “acceptance in becoming a prosumer” ad hoc scale. Variable 1. Attitude: agreement with specific environmental issues and global climate change concerns, and evaluations towards a behavioural intention. Variable 2. Economic incentive: perceived behavioural control and its related environmental risk perception, in terms of perceived short-term benefits and long-term costs, both part of the decision-making process as expected outcomes of the behaviour itself. Variable 3. Age: despite fewer economic possibilities, younger adults seem to be more sensitive to environmental dimensions and issues than older adults. This research can help policymakers and relevant stakeholders better understand which psycho-sociological factors intervene in these processes and what specifically to target, and how, when proposing change towards sustainable energy production and consumption.
Keywords: behavioural intention, environmental risk perception, prosumer, renewable energy technology, social acceptance
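The factor-extraction step described above, in which latent psychological factors are identified from correlated survey items, can be sketched with a toy eigenvalue analysis. The two synthetic latent traits and six items below are illustrative assumptions (the study itself extracted five factors from its own questionnaire); the Kaiser criterion (eigenvalue > 1) is one common rule for deciding how many factors to retain.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Two illustrative latent traits drive the observed survey items
# (stand-ins for factors such as "environmental concern" and
# "interest in energy sharing").
concern = rng.normal(0, 1, n)
sharing = rng.normal(0, 1, n)

# Six observed Likert-style items, each loading mainly on one trait.
items = np.column_stack([
    concern + rng.normal(0, 0.5, n),
    concern + rng.normal(0, 0.5, n),
    concern + rng.normal(0, 0.5, n),
    sharing + rng.normal(0, 0.5, n),
    sharing + rng.normal(0, 0.5, n),
    sharing + rng.normal(0, 0.5, n),
])

# Eigen-decomposition of the item correlation matrix; the Kaiser
# criterion (eigenvalue > 1) suggests how many factors to retain.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # sorted descending
n_factors = int((eigvals > 1).sum())
print(n_factors)  # → 2 for this synthetic two-trait example
```

Here two eigenvalues stand well above 1 and the remaining four fall well below it, so the analysis recovers the two planted traits.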
24 Geomechanics Properties of Tuzluca (Eastern Turkey) Bedded Rock Salt and Geotechnical Safety
Authors: Mehmet Salih Bayraktutan
Abstract:
Geomechanical properties of the rock salt deposits in the Tuzluca Salt Mine area (Eastern Turkey) are studied for modeling the operation-excavation strategy. This research focuses on calculating the critical value of span height which will meet the safety requirements. The mine site, Tuzluca Hills, consists of alternating parallel beds of salt (NaCl) and gypsum (CaSO4·2H2O). Rock salt beds are more resistant than the narrow gypsum interlayers. Rock salt beds form almost 97 percent of the total height of the hill; therefore, the geotechnical safety of the galleries depends on the mechanical criteria of the rock salt cores. The deposition of the Tuzluca Basin was finally completed by the Tuzluca Evaporites, the uppermost stratigraphic unit. Mining operations are currently performed by classic mechanical excavation using the room-and-pillar method. Rooms and pillars are currently experiencing an initial stage of fracturing in places. The geotechnical safety of the whole mining area is evaluated by Rock Mass Rating (RMR), Rock Quality Designation (RQD), spacing of joints, and the interaction of groundwater and the fracture system. In general, bedded rock salt shows a large lateral deformation capacity, while the deformation modulus stays at relatively small values (here E = 9.86 GPa). In such litho-stratigraphic environments, creep is a critical mechanism in failure. The rock salt creep rate in steady state is greater than that of the interbedding layers. Under long-lasting compressive stresses, creep may cause shear displacements, partly along bedding planes. Eventually, steady-state creep passes into an accelerated stage. Uniaxial compression creep tests on specimens were performed to obtain an idea of rock salt strength. To give an idea, on rock salt cores, average axial strength and strain were found to be 18-24 MPa and 0.43-0.45%, respectively, with a uniaxial compressive strength of 26-32 MPa from bedded rock salt cores.
The elastic modulus is comparatively low, but lateral deformation of the rock salt is high under the uniaxial compression stress state: Poisson ratio = 0.44, break load = 156 kN, cohesion c = 12.8 kg/cm2, specific gravity SG = 2.17 g/cm3. The fracture system (spacing of fractures, joints, faults, offsets) is evaluated under the acting geodynamic mechanism. Two sand beds, each 4-6 m thick, exist near the upper level and at the top of the evaporite sequence. They act as aquifers and keep infiltrated water on top for a long duration, which may result in the failure of roofs or pillars. Two major active seismic fault planes (striking N30W and N70E) and parallel fracture strands pose a moderate risk of seismically triggered structural deformation of the rock salt bedding sequence. Earthquakes and floods are the two prevailing sources of geohazards in this region; the seismotectonic activity of the mine site is based on the crossing framework of the Kagizman Faults and Igdir Faults. Dominant hazard risk sources include: (a) weak mechanical properties of the rock salt, gypsum and anhydrite beds (creep); (b) physical discontinuities cutting across the thick parallel layers of the evaporite mass; (c) intercalated beds of weakly cemented or loose sand and clayey-sandy sediments. On the other hand, the absorbing effect of the parallel-bedded salt-gypsum deposits reduces seismic wave amplitudes within the rock mass.
Keywords: bedded rock salt, creep, failure mechanism, geotechnical safety
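Steady-state creep, identified above as the critical failure mechanism, is commonly described for rock salt by a power-law (Norton) model. The sketch below illustrates its strong stress sensitivity, which is what makes span and pillar dimensioning so critical; all parameter values are generic assumptions for rock salt, not values fitted to the Tuzluca cores.

```python
import math

# Steady-state power-law (Norton) creep, a standard constitutive model for
# rock salt:  strain_rate = A * sigma^n * exp(-Q / (R * T)).
# All parameter values are illustrative assumptions, NOT fitted to the
# Tuzluca creep tests.
A = 8.1e-5    # pre-exponential constant, 1/(MPa^n * day) (assumed)
n = 3.5       # stress exponent (assumed)
Q = 51600.0   # activation energy, J/mol (assumed)
R = 8.314     # universal gas constant, J/(mol*K)
T = 298.0     # ambient mine temperature, K (assumed)

def creep_rate(sigma_mpa):
    """Steady-state creep strain rate (1/day) at deviatoric stress sigma."""
    return A * sigma_mpa ** n * math.exp(-Q / (R * T))

# Doubling the pillar stress multiplies the creep rate by 2^n (about 11x),
# which is why the critical span height matters for long-term safety.
ratio = creep_rate(20.0) / creep_rate(10.0)
print(round(ratio, 1))  # → 11.3
```

The exponential temperature term cancels in the ratio, so the stress exponent alone controls how sharply creep accelerates as room spans widen and pillar stresses rise.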
23 Crowdfunding: Could it be Beneficial to Social Entrepreneurship
Authors: Berrachid Dounia, Bellihi Hassan
Abstract:
The financial crisis created a barrier for small projects looking for funding, but on the other hand it has had at least one interesting side effect: the rise of alternative and increasingly creative forms of financing. Traditional forms of financing have declined due to the difficult economic recession that all parts of the world have known. Having an innovative idea with an effect on both the economic side and the social side is very beneficial for those who want to escape the economic crisis. In this case, entrepreneurs who want to be successful look for the means of financing that will bring their projects to reality. The financing can be various: the entrepreneur can use his own resources, go to the three “Fs” (family, friends, and fools), look for angel investors, or try academic solutions like universities and private incubators. Sometimes, however, entrepreneurs feel uncomfortable about these means and start looking for newer, less traditional forms of financing their projects. In the last few years, people have shown great interest in the use of the internet for many reasons (information, social networking, communication, entertainment, transactions, etc.). The use of the internet facilitates relations between people and eases the maintenance of existing relationships; it also increases the number of exchanges, which leads to a “collective creativity”. Moreover, the internet gives an opportunity to create new tools for mobilizing civil society, which makes participation in a project much easier. The new atmosphere of business forces project leaders to look for new solutions of financing that cut out the financial intermediaries. Using platforms to finance projects is an alternative that is changing the traditional solutions of financing projects. New creative ways of lending money have appeared, such as peer-to-peer (person-to-person, or P2P) lending.
This direct digital intermediation has its origins in microcredit principles. Crowdfunding also, like P2P, involves getting individuals to pool their resources to finance a project without a typical financial intermediary. For Lambert and Schwienbacher, "Crowdfunding involves an open call, essentially through the Internet, for the provision of financial resources either in the form of donations (without rewards) or in exchange for some form of reward and/or voting rights in order to support initiatives for specific purposes". The idea of this proposal, for investors and entrepreneurs, is to encourage small contributions from a large number of funders, "the crowd", in order to raise money to fund projects. All these conditions made crowdfunding a useful alternative for project leaders, and especially for those carrying special ideas that need special funds. As mentioned by Laflamme and Lafortune, the internet is a tool for mobilizing civil society. In our case, crowdfunding is the tool that funds social entrepreneurship; in the case of not-for-profit organizations, it focuses its attention on social problems that could be resolved by mobilizing different resources, creating innovative initiatives, and building new social arrangements that call up civil society. Social entrepreneurs are mostly the ones who go onto crowdfunding websites: they propose the amount expected to realize their project and then receive the funds from crowdfunders. Sometimes the crowdfunders expect something in return, like a product from the business (a sample of a product, in the case of a cooperative, or a CD, in the case of films or songs), but not their money back. Thus, we cannot say that their funds are donations, because a donor does not expect anything back. However, in order to encourage crowdfunders, rewards motivate people to take an interest in projects and put some money in via the internet.
The operation of crowdfunding satisfies all parties: investors, entrepreneurs and also crowdfunding site owners. This paper aims to give a view of the mechanism of crowdfunding, by clarifying its techniques and different categories, and of social entrepreneurship as a sponsor of social development. Also, it aims to show how this alternative form of financing could be beneficial for social entrepreneurs and how it brings a solution for funding social projects. The article concludes with a discussion of the contribution of crowdfunding to social entrepreneurship, especially in the Moroccan context.
Keywords: crowd-funding, social entrepreneurship, projects funding, financing
22 Extracellular Polymeric Substances Study in an MBR System for Fouling Control
Authors: Dimitra C. Banti, Gesthimani Liona, Petros Samaras, Manasis Mitrakas
Abstract:
Municipal and industrial wastewaters are often treated biologically, by the activated sludge process (ASP). The ASP not only requires large aeration and sedimentation tanks, but also generates large quantities of excess sludge. An alternative technology is the membrane bioreactor (MBR), which replaces two stages of the conventional ASP (clarification and settlement) with a single, integrated biotreatment and clarification step. The advantages offered by the MBR over conventional treatment include a reduced footprint and reduced sludge production through maintaining a high biomass concentration in the bioreactor. Notwithstanding these advantages, the widespread application of the MBR process is constrained by membrane fouling. Fouling leads to permeate flux decline, making more frequent membrane cleaning and replacement necessary and resulting in increased operating costs. In general, membrane fouling results from the interaction between the membrane material and the components in the activated sludge liquor. The latter includes substrate components, cells, cell debris and microbial metabolites, such as Extracellular Polymeric Substances (EPS) and Soluble Microbial Products (SMPs). The challenge for effective MBR operation is to minimize the rate of Transmembrane Pressure (TMP) increase. This can be achieved in several ways, one of which is the addition of specific additives that enhance the coagulation and flocculation of the compounds responsible for fouling, hence reducing biofilm formation on the membrane surface and limiting the fouling rate. In this project the effectiveness of a non-commercial composite coagulant was studied as an agent for fouling control in a lab-scale MBR system consisting of two aerated tanks. A flat sheet membrane module with 0.40 um pore size was submerged into the second tank. The system was fed by 50 L/d of municipal wastewater collected from the effluent of the primary sedimentation basin.
The TMP increase rate, which is directly related to fouling growth, was monitored by a PLC system. EPS, MLSS and MLVSS measurements were performed on samples of mixed liquor; in addition, influent and effluent samples were collected for the determination of physicochemical characteristics (COD, BOD5, NO3-N, NH4-N, Total N and PO4-P). The coagulant was added in concentrations of 2, 5 and 10 mg/L during a period of 2 weeks, and the results were compared with the control system (without coagulant addition). EPS fractions were extracted by a three-stage physical-thermal treatment allowing the identification of Soluble EPS (SEPS, or SMP), Loosely Bound EPS (LBEPS) and Tightly Bound EPS (TBEPS). Protein and carbohydrate concentrations were measured in the EPS fractions by the modified Lowry method and the Dubois method, respectively. Addition of coagulant at 2 mg/L did not affect SEPS proteins in comparison with the control process, and their values varied between 32 and 38 mg/g VSS. However, a coagulant dosage of 5 mg/L resulted in a slight increase of SEPS proteins to 35-40 mg/g VSS, while 10 mg/L coagulant further increased SEPS to 44-48 mg/g VSS. Similar results were obtained for SEPS carbohydrates. Carbohydrate values without coagulant addition were similar to the corresponding values measured for 2 mg/L coagulant; the addition of 5 mg/L coagulant resulted in a slight increase of carbohydrate SEPS to 6-7 mg/g VSS, while a dose of 10 mg/L further increased the carbohydrate content to 9-10 mg/g VSS. Total LBEPS and TBEPS, consisting of the proteins and carbohydrates of LBEPS and TBEPS respectively, presented similar variations upon the addition of the coagulant. Total LBEPS at a 2 mg/L dose were almost equal to 17 mg/g VSS, and their values increased to 22 and 29 mg/g VSS with the addition of 5 mg/L and 10 mg/L of coagulant, respectively. Total TBEPS were almost 37 mg/g VSS at a coagulant dose of 2 mg/L and increased to 42 and 51 mg/g VSS at 5 mg/L and 10 mg/L doses, respectively.
Therefore, it can be concluded that coagulant addition could potentially affect the microorganisms' activity, causing them to excrete EPS in greater amounts. Nevertheless, the EPS increase, mainly the SEPS increase, resulted in a higher membrane fouling rate, as evidenced by the corresponding TMP increase rate. However, the addition of the coagulant, although it affected the EPS content of the reactor mixed liquor, did not change the filtration process: an effluent of high quality was produced, with COD values as low as 20-30 mg/L.
Keywords: extracellular polymeric substances, MBR, membrane fouling, EPS
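The normalization of assay results to mg/g VSS used throughout the abstract above can be sketched as follows. The calibration slope and the sample values are hypothetical, chosen only so that the result lands in the reported SEPS protein range; the actual calibration would come from protein (or glucose) standards.

```python
# Converting a colorimetric assay reading (e.g., modified Lowry for proteins,
# Dubois for carbohydrates) into an EPS content normalized to biomass, in
# mg/g VSS. Calibration and sample values below are illustrative assumptions,
# not data from the study.

def eps_mg_per_g_vss(absorbance, slope, intercept, extract_volume_ml,
                     sample_volume_ml, mlvss_g_per_l):
    """EPS protein (or carbohydrate) normalized to volatile suspended solids.

    absorbance        : assay reading of the EPS extract
    slope, intercept  : linear calibration giving mg/L from absorbance
    extract_volume_ml : volume of EPS extract produced
    sample_volume_ml  : mixed-liquor sample volume that was extracted
    mlvss_g_per_l     : MLVSS concentration of the mixed liquor
    """
    conc_mg_per_l = slope * absorbance + intercept        # mg/L in the extract
    mass_mg = conc_mg_per_l * extract_volume_ml / 1000.0  # mg in the extract
    vss_g = mlvss_g_per_l * sample_volume_ml / 1000.0     # g VSS in the sample
    return mass_mg / vss_g

# Hypothetical numbers chosen to land in the reported SEPS protein range
# (roughly 32-48 mg/g VSS):
value = eps_mg_per_g_vss(absorbance=0.45, slope=400.0, intercept=0.0,
                         extract_volume_ml=50.0, sample_volume_ml=50.0,
                         mlvss_g_per_l=5.0)
print(round(value, 1))  # → 36.0
```

Normalizing to VSS rather than reporting raw mg/L is what makes the 2, 5 and 10 mg/L dosing runs comparable despite fluctuating biomass concentrations.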
21 Location3: A Location Scouting Platform for the Support of Film and Multimedia Industries
Authors: Dimitrios Tzilopoulos, Panagiotis Symeonidis, Michael Loufakis, Dimosthenis Ioannidis, Dimitrios Tzovaras
Abstract:
The domestic film industry in Greece has traditionally relied heavily on state support. While film productions are crucial for the country's economy, it has not fully capitalized on attracting and promoting foreign productions. The lack of motivation, of organized state support for attraction and licensing, and of location scouting has hindered its potential. Although recent legislative changes have addressed the first two of these issues, the development of a comprehensive location database and a search engine that would effectively support location scouting at the pre-production stage is still in its early stages. In addition to the expected benefits for the film, television, marketing, and multimedia industries, a location-scouting service platform has the potential to yield significant financial gains locally and nationally. By promoting featured places like cultural and archaeological sites, natural monuments, and attraction points for visitors, it plays a vital role in both cultural promotion and facilitating tourism development. This study introduces LOCATION3, an internet platform revolutionizing film production location management. It interconnects location providers, film crews, and multimedia stakeholders, offering a comprehensive environment for seamless collaboration. The platform's central geodatabase (PostgreSQL) stores each location's attributes, while web technologies like HTML, JavaScript, CSS, React.js, and Redux power the user-friendly interface. Advanced functionalities, utilizing deep learning models developed in Python, are integrated via Node.js. Visual data presentation is achieved using the Leaflet JS library, delivering an interactive map experience. LOCATION3 sets a new standard, offering a range of essential features to enhance the management of film production locations.
Firstly, it empowers users to effortlessly upload audiovisual material enriched with geospatial and temporal data, such as location coordinates, photographs, videos, 360-degree panoramas, and 3D location models. With the help of cutting-edge deep learning algorithms, the application automatically tags these materials, while users can also tag them manually. Moreover, the application allows users to record locations directly through its user-friendly mobile application. Users can then embark on seamless location searches, employing spatial or descriptive criteria. This intelligent search functionality considers a combination of relevant tags, dominant colors, architectural characteristics, emotional associations, and unique location traits. One of the application's standout features is the ability to explore locations by their visual similarity to other materials, facilitated by a reverse image search. Also, the interactive map serves as both a dynamic display for locations and a versatile filter, adapting to the user's preferences and effortlessly enhancing location searches. To further streamline the process, the application facilitates the creation of location lightboxes, enabling users to efficiently organize and share their content via email. Going above and beyond location management, the platform also provides invaluable liaison, matchmaking, and online marketplace services. This powerful functionality bridges the gap between providers of visual and three-dimensional geospatial material, local agencies, film companies, production companies, etc., so that those interested in a specific location can access additional material beyond what is stored on the platform, as well as production services supporting the functioning and completion of productions in a location (equipment provision, transportation, catering, accommodation, etc.).
Keywords: deep learning models, film industry, geospatial data management, location scouting
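The reverse-image-search feature described above is typically built on deep-learning image embeddings compared by cosine similarity. The sketch below is a generic illustration of that idea, with random vectors standing in for embeddings from a pretrained vision model; it is not the LOCATION3 implementation, and the location names and embedding dimension are invented for the example.

```python
import numpy as np

def cosine_top_k(query, catalog, names, k=2):
    """Rank stored locations by cosine similarity to a query embedding."""
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    sims = c @ q                              # cosine similarity per location
    order = np.argsort(sims)[::-1][:k]        # best matches first
    return [(names[i], float(sims[i])) for i in order]

rng = np.random.default_rng(1)
names = ["coastal cliff", "archaeological site", "urban rooftop", "pine forest"]
catalog = rng.normal(size=(4, 128))  # hypothetical 128-d image embeddings

# A query image visually close to the stored "pine forest" location:
query = catalog[3] + rng.normal(scale=0.1, size=128)
print(cosine_top_k(query, catalog, names)[0][0])  # → pine forest
```

In a production system the catalog would hold one embedding per uploaded photograph or panorama, and the same ranking could be combined with the tag, color, and trait filters mentioned above.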
Procedia PDF Downloads 71
20 Computational Fluid Dynamics Simulation of a Nanofluid-Based Annular Solar Collector with Different Metallic Nano-Particles
Authors: Sireetorn Kuharat, Anwar Beg
Abstract:
Motivation- Solar energy constitutes the most promising renewable energy source on earth. Nanofluids are a very successful family of engineered fluids, which contain well-dispersed nanoparticles suspended in a stable base fluid. The presence of metallic nanoparticles (e.g., gold, silver, copper, aluminum) significantly improves the thermo-physical properties of the host fluid and generally results in a considerable boost in the thermal conductivity, density, and viscosity of the nanofluid compared with the original base (host) fluid. This modification in fundamental thermal properties has profound implications for the convective heat transfer process in solar collectors. The potential for improving solar collector direct absorber efficiency is immense, and to gain a deeper insight into the impact of different metallic nanoparticles on efficiency and temperature enhancement, in the present work we describe recent computational fluid dynamics simulations of an annular solar collector system. The present work studies several different metallic nanoparticles and compares their performance. Methodologies- A numerical study of convective heat transfer in an annular pipe solar collector system is conducted. The inner tube contains pure water and the annular region contains nanofluid. Three-dimensional steady-state incompressible laminar flow comprising water- (and other) based nanofluid containing a variety of metallic nanoparticles (copper oxide, aluminum oxide, and titanium oxide nanoparticles) is examined. The Tiwari-Das model is deployed, for which the thermal conductivity, specific heat capacity, and viscosity of the nanofluid suspensions are evaluated as functions of solid nanoparticle volume fraction. Radiative heat transfer is also incorporated using the ANSYS solar flux and Rosseland radiative models. The ANSYS FLUENT finite volume code (version 18.1) is employed to simulate the thermo-fluid characteristics via the SIMPLE algorithm.
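The volume-fraction dependence referred to above can be sketched with the mixture rules commonly paired with the Tiwari-Das single-phase model (Maxwell's correlation for conductivity, Brinkman's for viscosity, and linear mixing for density and heat capacity). The property values below are indicative room-temperature figures for water and copper oxide, not the values used in the authors' simulations:

```python
def nanofluid_properties(phi, base, particle):
    """Effective nanofluid properties at solid volume fraction phi,
    via mixture rules commonly used with the Tiwari-Das model."""
    rho = (1 - phi) * base["rho"] + phi * particle["rho"]        # linear mixing
    rho_cp = ((1 - phi) * base["rho"] * base["cp"]
              + phi * particle["rho"] * particle["cp"])          # volumetric heat capacity
    mu = base["mu"] / (1 - phi) ** 2.5                           # Brinkman viscosity
    kf, ks = base["k"], particle["k"]
    # Maxwell effective thermal conductivity for dilute spherical particles
    k = kf * (ks + 2 * kf - 2 * phi * (kf - ks)) / (ks + 2 * kf + phi * (kf - ks))
    return {"rho": rho, "cp": rho_cp / rho, "mu": mu, "k": k}

# Indicative properties near 25 degrees C (illustrative values only).
water = {"rho": 997.1, "cp": 4179.0, "mu": 8.91e-4, "k": 0.613}
cuo = {"rho": 6500.0, "cp": 535.6, "k": 20.0}

props = nanofluid_properties(0.04, water, cuo)  # 4% solid volume fraction
```

At phi = 0 the expressions recover the base fluid exactly, and increasing phi raises conductivity, density, and viscosity together, which is the trade-off the collector simulations explore.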
Mesh-independence tests are conducted. Validation of the simulations is also performed with a computational Harlow-Welch MAC (Marker and Cell) finite difference method, and excellent correlation is achieved. The influence of volume fraction on temperature, velocity, and pressure contours is computed and visualized. Main findings- The best overall performance is achieved with copper oxide nanoparticles. Thermal enhancement is generally maximized when water is utilized as the base fluid, although in certain cases ethylene glycol also performs very efficiently. Increasing nanoparticle solid volume fraction elevates temperatures, although the effects are less prominent in aluminum and titanium oxide nanofluids. Significant improvement in temperature distributions is achieved with copper oxide nanofluid, and this is attributed to the superior thermal conductivity of copper compared to the other metallic nanoparticles studied. Important fluid dynamic characteristics are also visualized, including circulation and temperature overshoots near the upper region of the annulus. Radiative flux is observed to enhance temperatures significantly via energization of the nanofluid, although again the best elevation in performance is attained consistently with copper oxide. Conclusions- The current study generalizes previous investigations by considering multiple metallic nanoparticles and furthermore provides a good benchmark against which to calibrate experimental tests on a new solar collector configuration currently being designed at Salford University. Important insights into the thermal conductivity and viscosity of nanofluids with metallic nanoparticles are also provided in detail. The analysis is also extendable to other metallic nanoparticles, including gold and zinc.
Keywords: heat transfer, annular nanofluid solar collector, ANSYS FLUENT, metallic nanoparticles
Procedia PDF Downloads 143
19 Non Pharmacological Approach to IBS (Irritable Bowel Syndrome)
Authors: A. Aceranti, L. Moretti, S. Vernocchi, M. Colorato, P. Caristia
Abstract:
Irritable bowel syndrome (IBS) is the association of abdominal pain, abdominal distension, and intestinal dysfunction over recurring periods. About 10% of the world's population has IBS at any given time, and about 200 people per 100,000 receive an initial diagnosis of IBS each year. Persistent pain is recognized as one of the most pervasive and challenging problems facing the medical community today, and is considered a complex pathophysiological, diagnostic, and therapeutic situation rather than merely a persistent symptom. The low efficacy of conventional drug treatments has led many doctors to become interested in non-drug alternative treatments for IBS, especially for more severe cases. Patients and providers are often dissatisfied with the available drug remedies and often seek complementary and alternative medicine (CAM), a unique and holistic approach to treatment that is not a typical component of conventional medicine. Osteopathic treatment may be of specific interest in patients with IBS. Osteopathy is a complementary health approach that emphasizes the role of the musculoskeletal system in health and promotes optimal function of the body's tissues using a variety of manual techniques to improve body function. Osteopathy has been defined as a patient-centered health discipline based on the principles of the interrelation between body structure and function, the body's innate capacity for self-healing, and the adoption of a whole-person approach to health, delivered mainly through manual treatment. Studies have reported that osteopathic manual treatment (OMT) reduced IBS symptoms, such as abdominal pain, constipation, and diarrhea, and improved general well-being. The focus in the treatment of IBS with osteopathy has gone beyond simple spinal alignment, to directly address the abnormal physiology of the body using a series of direct and indirect techniques.
This topic was chosen for several reasons: the large number of people who suffer from the disorder, and the dysfunction itself, since there is still little clarity about the best type of treatment and, above all, about its origin. The visceral component in the osteopathic field is still a world to be discovered; although it is relevant to a large proportion of patients, it touches on numerous disciplines, and this makes it an enigma yet to be solved. The study originated in teaching practice, prompted by curiosity about a condition that, even today, no one is able to fully explain or cure definitively. The main purpose of this study is to lay a solid foundation in the osteopathic discipline for subsequent, more exhaustive studies, resolving some doubts about which treatment modality is most relevant. The study was structured so that three types of osteopathic treatment were applied to three groups of people, selected after completing a questionnaire that deemed them suitable for the study. The participants were divided into three groups: the first group received a visceral osteopathic treatment; the second group received a manual osteopathic treatment of neurological stimulation; the third group received a placebo treatment. After treatment, the questionnaires were administered again one week after the session and one month after the treatment, to collect data demonstrating the effectiveness or otherwise of the treatment received. The sample of 50 patients underwent an oral interview to evaluate the inclusion and exclusion criteria for participation in the study. Of the 50 patients interviewed, 17 who underwent the different osteopathic techniques were eligible for the study.
Comparing the data from the first assessment of tenderness and frequency of symptoms with the data from the first follow-up shows a significant improvement in the scores assigned to the different questions, especially in the neurogenic and visceral groups. We are aware that this is a study performed on a small sample of patients, which is a limiting factor. We remain convinced, however, that having obtained good results in terms of subjective improvement in the quality of life of the subjects, it would be very interesting to repeat the study on a larger sample and fill the gaps.
Keywords: IBS, osteopathy, colon, intestinal inflammation
Procedia PDF Downloads 101
18 Structural Characteristics of HPDSP Concrete on Beam Column Joints
Authors: Hari Krishan Sharma, Sanjay Kumar Sharma, Sushil Kumar Swar
Abstract:
Inadequate transverse reinforcement is considered the main reason for the beam column joint shear failures observed during recent earthquakes. The DSP matrix consists of cement and a high content of micro-silica with a low water-to-cement ratio, while the aggregates are graded quartz sand. The use of reinforcing fibres leads not only to an increase in tensile/bending strength and specific fracture energy, but also to a reduction in brittleness and, consequently, to non-explosive ruptures. Besides, fibre-reinforced materials are more homogeneous and less sensitive to small defects and flaws. Recent work on the freeze-thaw durability (also in the presence of de-icing salts) of fibre-reinforced DSP confirms its excellent behaviour over the expected long-term service life. DSP materials, including fibre-reinforced DSP and CRC (Compact Reinforced Composites), are obtained by using high quantities of superplasticizers and high volumes of micro-silica. Steel fibres of high tensile yield strength, small diameter, and short length, in different fibre volume percentages and aspect ratios, are utilized to improve performance by reducing the brittleness of the matrix material. In the case of High Performance Densified Small Particle Concrete (HPDSPC), the concrete is dense at the micro-structure level, and the tensile strain would be much higher than that of conventional SFRC, SIFCON, and SIMCON. Beam-column sub-assemblages serving as moment-resisting connections are constructed using HPDSPC in the joint region, with varying quantities of steel fibres, fibre aspect ratios, and fibre orientations in the critical section. These sub-assemblages are then tested under cyclic/earthquake loading.
Besides load measurements, frame displacements, diagonal joint strains, and rebar strains adjacent to the joint will also be measured to investigate the stress-strain behaviour, load-deformation characteristics, joint shear strength, failure mechanism, ductility-associated parameters, stiffness, and energy dissipation parameters of the beam column sub-assemblages. Finally, a design procedure for the optimum design of HPDSPC corresponding to the moments, shear forces, and axial forces in the reinforced concrete beam-column joint sub-assemblage will be proposed. The fact that implementing a material brittleness measure in the design of RC structures can improve structural reliability by providing uniform safety margins over a wide range of structural sizes and material compositions is well recognized in structural design and research. This has led to the development of high performance concrete for the optimized combination of various structural properties. The structural applications of HPDSPC, because of its extremely high strength, will reduce dead load significantly as compared to normal weight concrete, thereby offering substantial cost savings through improved seismic response, longer spans, thinner sections, less reinforcing steel, and lower foundation costs. These cost-effective parameters will make this material more versatile for use in various structural applications such as beam-column joints in industries, airports, parking areas, docks, and harbours, as well as containers for hazardous material, safety boxes, and moulds and tools for polymer composites and metals.
Keywords: high performance densified small particle concrete (HPDSPC), steel fibre reinforced concrete (SFRC), slurry infiltrated concrete (SIFCON), slurry infiltrated mat concrete (SIMCON)
Procedia PDF Downloads 303
17 Utilization of Developed Simple Sequence Repeats Markers for Dalmatian Pyrethrum (Tanacetum cinerariifolium) in Preliminary Genetic Diversity Study on Natural Populations
Authors: F. Varga, Z. Liber, J. Jakše, A. Turudić, Z. Šatović, I. Radosavljević, N. Jeran, M. Grdiša
Abstract:
Dalmatian pyrethrum (Tanacetum cinerariifolium (Trevir.) Sch. Bip.; Asteraceae), a source of the commercially dominant plant insecticide pyrethrin, is a species endemic to the eastern Adriatic. The genetic diversity of T. cinerariifolium was previously studied using amplified fragment length polymorphism (AFLP) markers. However, microsatellite markers (simple sequence repeats - SSRs) are more informative because they are codominant, highly polymorphic, locus-specific, and more reproducible, and thus are most often used to assess the genetic diversity of plant species. Dalmatian pyrethrum is an outcrossing diploid (2n = 18) whose large genome size and highly repetitive content have prevented the success of traditional approaches to SSR marker development. The advent of next-generation sequencing, combined with a specifically developed method, recently enabled the development of, to the authors' best knowledge, the first set of SSRs for genomic characterization of Dalmatian pyrethrum, which is essential from the perspective of plant genetic resources conservation. To evaluate the effectiveness of the developed SSR markers in the genetic differentiation of Dalmatian pyrethrum populations, a preliminary genetic diversity study was conducted on 30 individuals from three geographically distinct natural populations in Croatia (the northern Adriatic island of Mali Lošinj, the southern Adriatic island of Čiovo, and Mount Biokovo) based on 12 SSR loci. Analysis of molecular variance (AMOVA) with a randomization test of 10,000 permutations was performed in Arlequin 3.5. The average number of alleles per locus, observed and expected heterozygosity, probability of deviation from Hardy-Weinberg equilibrium, and inbreeding coefficient were calculated using GENEPOP 4.4. Genetic distance based on the proportion of shared alleles (DPSA) was calculated using MICROSAT. Cluster analysis using the neighbor-joining method with 1,000 bootstraps was performed with PHYLIP to generate a dendrogram.
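Two of the statistics named above are simple to state: observed heterozygosity is the fraction of individuals carrying two different alleles at a locus, and expected heterozygosity (Nei's gene diversity) is one minus the sum of squared allele frequencies. A minimal sketch on hypothetical SSR genotypes (allele sizes in base pairs are invented, not the study's data):

```python
from collections import Counter

def heterozygosity(genotypes):
    """Observed (Ho) and expected (He, Nei's gene diversity) heterozygosity
    at one SSR locus. genotypes: (allele_a, allele_b) per diploid individual."""
    n = len(genotypes)
    ho = sum(1 for a, b in genotypes if a != b) / n           # observed
    alleles = Counter(a for pair in genotypes for a in pair)
    total = 2 * n                                             # 2n allele copies
    he = 1 - sum((c / total) ** 2 for c in alleles.values())  # expected
    return ho, he

# Hypothetical genotypes at one locus for five individuals of one population.
pop = [(152, 158), (152, 152), (158, 160), (152, 158), (160, 160)]
ho, he = heterozygosity(pop)
```

With real data these values would be computed per locus and per population (as GENEPOP does), and an excess of He over Ho points toward inbreeding or population substructure.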
The AMOVA results showed that 23% of the total SSR diversity was within and 77% between the three populations. A slight deviation from Hardy-Weinberg equilibrium was observed in the Mali Lošinj population. Allelic richness ranged from 2.92 to 3.92, with the highest number of private alleles observed in the Mali Lošinj population (17). The average observed DPSA among the 30 individuals was 0.557. The highest DPSA (0.875) was observed between several pairs of individuals from the Mali Lošinj and Mt. Biokovo populations, and the lowest between two individuals from the Čiovo population. Neighbor-joining trees based on DPSA grouped individuals into clusters according to their population affiliation. The separation of the Mt. Biokovo clade was supported (bootstrap value 58%), which is consistent with the previous study based on AFLP markers, in which the isolated populations from Mt. Biokovo differed from the rest of the populations. The developed SSR markers are an effective tool for assessing the genetic diversity and structure of natural Dalmatian pyrethrum populations. These preliminary results are encouraging for a future comprehensive study with a larger sample size across the species' range. Combined with biochemical data, these highly informative markers could help identify potential genotypes of interest for the future development of breeding lines and cultivars that are both resistant to environmental stress and high in pyrethrins. Acknowledgment: This work has been supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.) insecticidal potential’ (PyrDiv) (IP-06-2016-9034) and by project KK.01.1.1.01.0005, Biodiversity and Molecular Plant Breeding, at the Centre of Excellence for Biodiversity and Molecular Plant Breeding (CoE CroP-BioDiv), Zagreb, Croatia.
Keywords: Asteraceae, genetic diversity, genomic SSRs, NGS, pyrethrum, Tanacetum cinerariifolium
Procedia PDF Downloads 114
16 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework
Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard
Abstract:
Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts with relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention.
The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from the WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, and mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools addressing five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, and improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic in that data volumes, instructions, and activities could overwhelm managers, and tools are not always applied to suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can be tested and developed further.
Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health
Procedia PDF Downloads 139
15 Blockchain Based Hydrogen Market (BBH₂): A Paradigm-Shifting Innovative Solution for Climate-Friendly and Sustainable Structural Change
Authors: Volker Wannack
Abstract:
Regional, national, and international strategies focusing on hydrogen (H₂) and blockchain are driving significant advancements in hydrogen and blockchain technology worldwide. These strategies lay the foundation for the groundbreaking "Blockchain Based Hydrogen Market (BBH₂)" project. The primary goal of this project is to develop a functional Blockchain Minimum Viable Product (B-MVP) for the hydrogen market. The B-MVP will leverage blockchain as an enabling technology with a common database and platform, facilitating secure and automated transactions through smart contracts. This innovation will revolutionize logistics, trading, and transactions within the hydrogen market. The B-MVP has transformative potential across various sectors. It benefits renewable energy producers, surplus energy-based hydrogen producers, hydrogen transport and distribution grid operators, and hydrogen consumers. By implementing standardized, automated, and tamper-proof processes, the B-MVP enhances cost efficiency and enables transparent and traceable transactions. Its key objective is to establish the verifiable integrity of climate-friendly "green" hydrogen by tracing its supply chain from renewable energy producers to end users. This emphasis on transparency and accountability promotes economic, ecological, and social sustainability while fostering a secure and transparent market environment. A notable feature of the B-MVP is its cross-border operability, eliminating the need for country-specific data storage and expanding its global applicability. This flexibility not only broadens its reach but also creates opportunities for long-term job creation through the establishment of a dedicated blockchain operating company. By attracting skilled workers and supporting their training, the B-MVP strengthens the workforce in the growing hydrogen sector. 
Moreover, it drives the emergence of innovative business models that attract additional company establishments and startups and contributes to long-term job creation. For instance, data evaluation can be utilized to develop customized tariffs and provide demand-oriented network capacities to producers and network operators, benefitting redistributors and end customers with tamper-proof pricing options. The B-MVP not only brings technological and economic advancements but also enhances the visibility of national and international standard-setting efforts. Regions implementing the B-MVP become pioneers in climate-friendly, sustainable, and forward-thinking practices, generating interest beyond their geographic boundaries. Additionally, the B-MVP serves as a catalyst for research and development, facilitating knowledge transfer between universities and companies. This collaborative environment fosters scientific progress, aligns with strategic innovation management, and cultivates an innovation culture within the hydrogen market. Through the integration of blockchain and hydrogen technologies, the B-MVP promotes holistic innovation and contributes to a sustainable future in the hydrogen industry. The implementation process involves evaluating and mapping suitable blockchain technology and architecture, developing and implementing the blockchain, smart contracts, and depositing certificates of origin. It also includes creating interfaces to existing systems such as nomination, portfolio management, trading, and billing systems, testing the scalability of the B-MVP to other markets and user groups, developing data formats for process-relevant data exchange, and conducting field studies to validate the B-MVP. 
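The tamper-evidence idea behind the certificates of origin mentioned above can be illustrated in miniature with a hash-chained ledger: each certificate record embeds the hash of its predecessor, so altering any entry breaks verification. This is a generic sketch of the concept, not the B-MVP's actual smart-contract implementation, and all producer names and figures are hypothetical:

```python
import hashlib
import json

def make_block(prev_hash, certificate):
    """Append a green-hydrogen certificate of origin as a block chained
    to its predecessor by hash, so any tampering breaks the chain."""
    block = {"prev": prev_hash, "cert": certificate, "ts": 0}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every block hash and check each back-link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical certificates from two renewable producers.
chain = [make_block("0" * 64, {"producer": "wind_park_A", "kg_h2": 500, "source": "wind"})]
chain.append(make_block(chain[-1]["hash"], {"producer": "solar_plant_B", "kg_h2": 320, "source": "solar"}))
```

On a real blockchain the chaining and validation would be enforced by consensus among the market participants rather than by a single verifier function, which is what makes the supply chain of "green" hydrogen traceable end to end.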
BBH₂ is part of the "Technology Offensive Hydrogen" funding call within the research funding of the Federal Ministry of Economics and Climate Protection in the 7th Energy Research Programme of the Federal Government.
Keywords: hydrogen, blockchain, sustainability, innovation, structural change
Procedia PDF Downloads 168
14 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of remaining useful life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models whose limited interpretability is a barrier to the critical diagnostic decisions required in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques to provide essential transparency in the decision-making mechanisms of Machine Learning methods based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the RUL is predicted by a Random Forest model. The process involves data gathering, feature engineering, model training, and evaluation, with the datasets for these critical components trained and evaluated independently. While the resulting predictions achieve reasonably good performance metrics, such complex models obscure the reasoning behind their predictions and may even undermine the confidence of decision-makers or maintenance teams. This is followed in the second phase by global explanations using SHAP and local explanations using LIME, to bridge the reliability gap in industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. In its third component, the proposed framework incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation.
This allows the model not only to predict failures but also to present the reasons for them, linking key sensor features to possible failure mechanisms for the relevant personnel. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root cause identification and more effective preventive measures, and makes the system more explainable. In a further stage, several simple surrogate models, including Decision Trees and Linear Models, are used to approximate the complex Random Forest model. These simpler models act as backups, replicating the important aspects of the original model's behavior. When the feature explanations obtained from the surrogate model are cross-validated against the primary model, the derived insights become more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This drives a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through a fully transparent condition monitoring and alert system. The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms.
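The surrogate step can be shown in miniature: a simple linear model is fit not to the raw RUL labels but to the predictions of an opaque nonlinear model, and fidelity is measured as R-squared against the complex model's outputs. The "complex model" below is a hand-written stand-in for the authors' Random Forest, and the sensor data are synthetic, not the C-MAPSS dataset:

```python
import random

random.seed(0)

# Stand-in for the trained Random Forest: an opaque nonlinear RUL predictor.
def complex_model(sensors):
    s1, s2 = sensors
    return 200 - 40 * s1 + 15 * s2 * s2

# Mock fleet of two-channel sensor readings (hypothetical).
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y_complex = [complex_model(x) for x in X]

def fit_linear_surrogate(X, y):
    """Ordinary least squares rul ~ a + b1*s1 + b2*s2, fit to the
    complex model's predictions (the essence of a surrogate model)."""
    n = len(X)
    cols = [[1.0] * n, [x[0] for x in X], [x[1] for x in X]]
    # Normal equations A @ coef = b, solved by Gaussian elimination.
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(c * yi for c, yi in zip(cols[i], y)) for i in range(3)]
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

a, b1, b2 = fit_linear_surrogate(X, y_complex)
fidelity = r_squared(y_complex, [a + b1 * x[0] + b2 * x[1] for x in X])
```

The recovered coefficient on the first sensor is close to the complex model's true linear effect, and the fidelity score quantifies how much of the black-box behavior the interpretable proxy captures, which is exactly the cross-validation check the framework performs before trusting surrogate explanations.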
Since the system provides explanations for its predictions, along with active alerts, maintenance personnel can make informed decisions regarding the correct interventions to extend the life of critical machinery.
Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset
Procedia PDF Downloads 6
13 Full Characterization of Heterogeneous Antibody Samples under Denaturing and Native Conditions on a Hybrid Quadrupole-Orbitrap Mass Spectrometer
Authors: Rowan Moore, Kai Scheffler, Eugen Damoc, Jennifer Sutton, Aaron Bailey, Stephane Houel, Simon Cubbon, Jonathan Josephs
Abstract:
Purpose: MS analysis of monoclonal antibodies (mAbs) at the protein and peptide levels is critical during development and production of biopharmaceuticals. The compositions of current-generation therapeutic proteins are often complex due to various modifications which may affect efficacy. Intact proteins analyzed by MS are detected in higher charge states that also add complexity to the mass spectra. Protein analysis in native or native-like conditions, with zero or minimal organic solvent and neutral or weakly acidic pH, decreases the charge state value, resulting in mAb detection at higher m/z ranges with more spatial resolution. Methods: Three commercially available mAbs were used for all experiments. Intact proteins were desalted online using size exclusion chromatography (SEC) or reversed phase chromatography coupled online with a mass spectrometer. For streamlined use of the LC-MS platform, we used a single SEC column and alternately selected specific mobile phases to perform separations in either denaturing or native-like conditions: buffer A (20% ACN, 0.1% FA) with buffer B (100 mM ammonium acetate). For peptide analysis, mAbs were proteolytically digested with and without prior reduction and alkylation. The mass spectrometer used for all experiments was a commercially available Thermo Scientific™ hybrid Quadrupole-Orbitrap™ mass spectrometer, equipped with the new BioPharma option, which includes a new High Mass Range (HMR) mode allowing improved high-mass transmission and mass detection up to 8000 m/z. Results: We have analyzed the profiles of three mAbs under denaturing and native conditions by direct infusion with offline desalting and with online desalting via size exclusion and reversed phase columns. The presence of high salt under denaturing conditions was found to influence the observed charge state envelope and impact mass accuracy after spectral deconvolution.
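The charge-state arithmetic behind these observations is direct: a protonated ion of mass M carrying z charges appears at m/z = (M + z · 1.00728)/z, so the same antibody shifts to much higher m/z as its charge drops. A sketch with an illustrative ~148 kDa IgG mass and typical-looking charge ranges (both assumptions, not the paper's measured values):

```python
PROTON = 1.00728  # mass of a proton in Da

def mz(mass_da, z):
    """m/z of a protein ion carrying z protons."""
    return (mass_da + z * PROTON) / z

mab_mass = 148_000.0  # illustrative intact IgG mAb mass, ~148 kDa

# Denatured ESI: high charge states keep signals at comparatively low m/z.
denatured = [round(mz(mab_mass, z)) for z in range(60, 40, -5)]   # z = 60..45

# Native ESI: few charges push signals past 5000-6000 m/z, which is why
# extended high-mass-range (HMR) detection is needed.
native = [round(mz(mab_mass, z)) for z in range(28, 20, -2)]      # z = 28..22
```

Each list spans one charge-state envelope; deconvolution software essentially inverts this relation to recover the neutral mass M from a series of adjacent charge states.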
The significantly lower charge states observed under native conditions improve the spatial resolution of protein signals and have significant benefits for the analysis of antibody mixtures, e.g., lysine variants, degradants, or sequence variants. This type of analysis requires the detection of masses beyond the standard mass range, up to 6000 m/z, requiring the extended capabilities available in the new HMR mode. We compared each antibody sample analyzed individually with mixtures in various relative concentrations. For this type of analysis, we observed that apparent native structures persist and that ESI benefits from the addition of low amounts of acetonitrile and formic acid in combination with the ammonium acetate-buffered mobile phase. For analyses at the peptide level, we analyzed reduced/alkylated and non-reduced proteolytic digests of the individual antibodies separated via reversed phase chromatography, aiming to retrieve as much information as possible regarding sequence coverage, disulfide bridges, post-translational modifications such as various glycans, sequence variants, and their relative quantification. All data acquired were submitted to a single software package for analysis, aiming to obtain a complete picture of the molecules analyzed. Here we demonstrate the capabilities of the mass spectrometer to fully characterize homogeneous and heterogeneous therapeutic proteins on one single platform. Conclusion: Full characterization of heterogeneous intact protein mixtures by improved mass separation on a quadrupole-Orbitrap™ mass spectrometer with extended capabilities has been demonstrated.
Keywords: disulfide bond analysis, intact analysis, native analysis, mass spectrometry, monoclonal antibodies, peptide mapping, post-translational modifications, sequence variants, size exclusion chromatography, therapeutic protein analysis, UHPLC
Procedia PDF Downloads 361
Unleashing Potential in Pedagogical Innovation for STEM Education: Applying Knowledge Transfer Technology to Guide a Co-Creation Learning Mechanism for the Lingering Effects Amid COVID-19
Authors: Lan Cheng, Harry Qin, Yang Wang
Abstract:
Background: COVID-19 has induced the largest digital learning experiment in history. There is also emerging research evidence that students have paid a high cost in learning loss from virtual learning. University-wide survey results demonstrate that digital learning remains difficult for students who struggle with learning challenges, isolation, or a lack of resources. Large-scale efforts are therefore increasingly devoted to digital education. To better prepare students in higher education for this grand scientific and technological transformation, STEM education has been prioritized and promoted as a strategic imperative in the ongoing curriculum reform, essential for unfinished learning needs and whole-person development. Building upon five key elements identified in the STEM education literature (problem-based learning, community and belonging, technology skills, personalization of learning, and connection to the external community), this case study explores the potential of pedagogical innovation that integrates computational and experimental methodologies to support, enrich, and navigate STEM education. Objectives: The goal of this case study is to create a high-fidelity prototype design for STEM education with knowledge transfer technology containing a Cooperative Multi-Agent System (CMAS), with the objectives of (1) conducting assessments to reveal the virtual learning mechanism and establish strategies that facilitate scientific learning engagement, accessibility, and connection within and beyond the university setting, (2) exploring and validating an interactional co-creation approach embedded in project-based learning activities in a STEM learning context that is being transformed by both digital technology and changing student behavior, and (3) formulating and implementing a STEM-oriented campaign to guide learning network mapping, mitigate learning loss, enhance the learning experience, and scale up inclusive participation. 
Methods: This study applied a case study strategy and a methodology informed by social network analysis theory within a cross-disciplinary communication paradigm (students, peers, educators). Knowledge transfer technology is introduced to address learning challenges and to increase the efficiency of reinforcement learning (RL) algorithms. A co-creation learning framework was identified and investigated in a context-specific way with a learning analytics tool designed in this study. Findings: The results show that (1) CMAS-empowered learning support reduced students’ confusion, difficulties, and gaps during problem-solving scenarios while increasing learner capacity and empowerment, (2) the co-creation learning phenomenon, examined through the lens of the campaign, reveals that an interactive virtual learning environment enables students to navigate scientific challenges independently and collaboratively, and (3) the deliverables of the STEM educational campaign provide a methodological framework both within the context of curriculum design and for external community engagement. Conclusion: This study brings a holistic and coherent pedagogy that cultivates students’ interest in STEM and develops a knowledge base for integrating and applying knowledge across different STEM disciplines. Through co-designed, cross-disciplinary educational content and campaign promotion, the findings suggest factors that empower evidence-based learning practice while also piloting and tracking the scholastic value of co-creation in a dynamic learning environment. The data nested in the knowledge transfer technology situate learners’ scientific journeys and could pave the way for theoretical advancement and broader scientific endeavors within larger datasets, projects, and communities.
Keywords: co-creation, cross-disciplinary, knowledge transfer, STEM education, social network analysis
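The learning-network mapping that social network analysis enables can be sketched with a simple centrality measure over logged interactions. The edges below are hypothetical student-peer-educator interactions invented for illustration; a real learning analytics tool would derive them from platform logs.

```python
# Minimal degree-centrality sketch for a learning interaction network;
# the edge list is a hypothetical placeholder, not study data.
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected interaction network."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)  # number of nodes observed in the edge list
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical interactions: three students and one educator.
edges = [("student_a", "educator"), ("student_b", "educator"),
         ("student_c", "educator"), ("student_a", "student_b")]
central = degree_centrality(edges)  # the educator emerges as the hub
```

Centrality scores like these are one way such a campaign could identify isolated learners (low centrality) for targeted support.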
Procedia PDF Downloads 114
The Impact of Neighborhood Effects on the Economic Mobility of the Inhabitants of Three Segregated Communities in Salvador (Brazil)
Authors: Stephan Treuke
Abstract:
The paper analyses the neighbourhood effects on the economic mobility of the inhabitants of three segregated communities of Salvador (Brazil), in other words, the socio-economic advantages and disadvantages affecting the lives of poor people due to their embeddedness in specific socio-residential contexts. Recent studies performed in Brazilian metropolises have concentrated on the structural dimensions of negative externalities in order to explain neighbourhood-level variations in a range of phenomena (delinquency, violence, access to the labour market and education) in spatially isolated and socially homogeneous slum areas (favelas). However, major disagreement remains over whether the contiguity between residents of poor neighbourhoods and higher-class condominio-dwellers provides structures of opportunity or whether it fosters socio-spatial stigmatization. Based on a set of interviews investigating the variability of interpersonal networks and their activation in the struggle for economic inclusion, the study confirms that the proximity of Nordeste de Amaralina to middle- and upper-class communities positively affects access to labour opportunities. Nevertheless, residential stigmatization, as well as structures of social segmentation, annihilates these potentials. The lack of exposure to individuals and groups beyond the favela’s social, educational and cultural context restricts the structures of opportunity to the local level. Therefore, residents’ interpersonal networks reveal a high degree of redundancy and localism, based on bonding ties connecting family and neighbourhood members. The resilience of segregational structures in Plataforma contributes to the naturalization of social distance patterns. 
Its embeddedness in a socially homogeneous residential area (Subúrbio Ferroviário), growing informally and beyond official urban politics, encourages the construction of isotopic patterns of sociability, sharing the same values, social preferences, perspectives and behaviour models. Whereas its spatial isolation correlates with the scarcity of economic opportunities, the social heterogeneity of the Fazenda Grande II interviewees and the socializing effects of public institutions mitigate the negative repercussions of segregation. The networks’ composition admits a higher degree of heterophily and a greater proportion of bridging ties, accounting for access to broader information assets and facilitating economic mobility. The variability observed within the three different scenarios urges reflection on the responsibility of urban politics when it comes to the prevention or consolidation of the social segregation process in Salvador. Instead of promoting the local development of the favela Plataforma, public housing programs prioritize technocratic habitational solutions without providing for the residents’ socio-economic integration. The impact of negative externalities related to the homogeneously poor neighbourhood is amplified in peripheral areas, rendering its inhabitants socially invisible and thus isolated from other social groups. The example of Nordeste de Amaralina portrays the failing interest of urban politics in bridging the social distances structuring Brazilian society’s rigid stratification model, founded on mechanisms of segmentation (unequal access to the labour market and education system, public transport, social security and law protection) and generating permanent conflicts between the two socioeconomically distant groups living in geographic contiguity. Finally, in the case of Fazenda Grande II, the public investments in both housing projects and complementary infrastructure (e.g. 
schools, hospitals, community centers, police stations, recreation areas) contribute to the residents’ socio-economic inclusion.
Keywords: economic mobility, neighborhood effects, Salvador, segregation
Procedia PDF Downloads 279
Examining Language as a Crucial Factor in Determining Academic Performance: A Case of Business Education in Hong Kong
Authors: Chau So Ling
Abstract:
I. INTRODUCTION: Educators have always been interested in exploring factors that contribute to students’ academic success. It is beyond question that language, as a medium of instruction, will affect student learning. This paper investigates whether language is a crucial factor in determining students’ achievement in their studies. II. BACKGROUND AND SIGNIFICANCE OF STUDY: The use of English as a medium of instruction in Hong Kong is a special topic because Hong Kong is a post-colonial, international city and former British colony. In such a specific language environment, researchers in the education field have always been interested in investigating students’ language proficiency and its relation to academic achievement and other related educational indicators such as motivation to learn, self-esteem, learning effectiveness, self-efficacy, etc. Along this line of thought, this study focused specifically on business education. III. METHODOLOGY: The methodology in this study involved two sequential stages, namely a focus group interview and a data analysis, covering both qualitative and quantitative aspects. The subjects of the study were divided into two groups. For the first group, participating in the interview, a total of ten high school students were invited. They studied Business Studies, and their English standards varied. The theme of the discussion was “Does English affect your learning and examination results of Business Studies?” The students were facilitated to discuss the extent to which their English standard affected their learning of Business subjects and were requested to rate the correlation between English and performance in Business Studies on a five-point scale. The second stage of the study involved another group of students: high school graduates who had taken the public examination for entering universities. 
A database containing their public examination results for different subjects was obtained for the purpose of statistical analysis. Hypotheses were tested, and evidence was obtained from the focus group interview to triangulate the findings. IV. MAJOR FINDINGS AND CONCLUSION: Through the sharing of personal experience, the focus group discussion indicated that higher English standards could help students achieve better learning and examination performance. To conclude the interview, the students were asked to indicate the correlation between English proficiency and performance in Business Studies on a five-point scale. With point one meaning least correlated, ninety percent of the students gave point four for the correlation. These preliminary results illustrated that English plays an important role in students’ learning of Business Studies, or at least that this was what the students perceived, which set the hypotheses for the study. After conducting the focus group interview, further evidence had to be gathered to support the hypotheses. The data analysis stage examined the relationship by correlating the students’ public examination results in Business Studies with their levels of English standard. The results indicated a positive correlation between English standard and Business Studies examination performance. To highlight the importance of the English language to the study of Business Studies, the correlation with the public examination results of other non-business subjects was also tested. Statistical results showed that language plays a greater role in students’ performance in Business subjects than in the other subjects. The explanation includes the dynamic subject nature, examination format and study requirements, the specialist language used, etc. 
Unlike in Science and Geography, students might find it more difficult in their learning process to relate business concepts or terminologies to their own experience, and there are not many obvious physical or practical activities or visual aids to serve as evidence or experiments. It is well researched in Hong Kong that English proficiency is a determinant of academic success, and other research studies have verified this notion. For example, research revealed that the more enriched the language experience, the better the cognitive performance in conceptual tasks. The ability to perform this kind of task is particularly important for students taking Business subjects. Another study, carried out in the UK, was geared towards identifying and analyzing the reasons for underachievement across a cohort of GCSE students taking Business Studies. Results showed that weak language ability was the main barrier to raising students’ performance levels. The interview result was thus successfully triangulated with the data findings. Although educational failure cannot be reduced to linguistic failure, and language is just one of the variables at play in determining academic achievement, it is generally accepted that language does affect students’ academic performance; it is just a matter of extent. This paper provides recommendations for business educators on students’ language training and sheds light on further research possibilities in this area.
Keywords: academic performance, language, learning, medium of instruction
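The correlation step in the data analysis can be sketched as follows. The grade lists below are fabricated placeholders for illustration, not the study's database; only the Pearson formula itself is standard.

```python
# Pearson correlation sketch for English level vs. Business Studies marks;
# all data below are invented placeholders, not the study's results.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

english = [3, 4, 5, 2, 4, 5, 3, 2]           # hypothetical English levels
business = [55, 62, 78, 48, 70, 81, 60, 50]  # hypothetical Business marks
r = pearson(english, business)  # a positive r supports the hypothesis
```

Repeating the same computation against non-business subjects, as the study did, is what allows the comparison of how strongly language tracks each subject.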
Procedia PDF Downloads 121
Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
Within the volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, financial data's complex and networked nature calls for more sophisticated approaches. This study offers a groundbreaking method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is meticulously designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, affords a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We meticulously collect daily opening, closing, high and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm is adept at learning the relational patterns among individual financial instruments represented as nodes in a comprehensive market graph. 
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatio-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive assessment of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to revolutionize investment techniques and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
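A minimal numpy sketch of the graph-convolutional step described above is shown below. The three-asset graph, feature values, and random weights are invented placeholders; the abstract's actual model would stack such layers and feed the per-day node embeddings into an LSTM, which is omitted here.

```python
# One GCN layer over a tiny (placeholder) market graph:
# relu(D^-1/2 (A + I) D^-1/2 H W), the standard symmetric propagation rule.
import numpy as np

def gcn_layer(A, H, W):
    """Single graph-convolution layer with self-loops and ReLU."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

# Nodes could be e.g. S&P 500, NASDAQ, Bitcoin; edges from co-movement
# (all values here are placeholders, not estimated from data).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[0.2, 1.0],      # per-node features: [daily return, volume]
              [0.1, 0.8],
              [0.5, 2.0]])
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))    # learnable weights (random stand-ins)
embeddings = gcn_layer(A, H, W)  # one 4-dim embedding per asset
```

In the fusion architecture, such embeddings computed for each trading day would form the sequence the LSTM consumes.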
Procedia PDF Downloads 65
Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods
Authors: Murat Arıbaş, Uğur Özcan
Abstract:
Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a difficult process to compare and determine which project is better, since the projects serve different purposes. In addition, the evaluation process has become complicated since there is more than one evaluator and there are multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) framed project selection as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilités Additives Discriminantes), ELECTRE (Élimination et Choix Traduisant la Réalité), MAUT (Multiattribute Utility Theory), GRA (Grey Relational Analysis), etc. Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of a problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). 
This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in our country, was investigated. Firstly, the criteria to be used in project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and extensive impact of the project. Moreover, for each main criterion, 2-4 sub-criteria were defined; hence it was decided to evaluate projects over 13 sub-criteria in total. Because of the superiority of the AHP method in determining criteria weights and the opportunity the TOPSIS method provides for ranking a great number of alternatives, the two methods were used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple and easily understood structure, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative-ideal solution point is calculated using a Euclidean distance approach. In the study, the main criteria and sub-criteria were compared on their own merits using questionnaires that were developed based on an importance scale by four groups of respondents (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. 
These calculated criteria weights were then used as input to the TOPSIS method, and a sample consisting of 200 projects was ranked on its merits. This new system provided the opportunity to incorporate the views of the people who take part in the project process, including preparation, evaluation and implementation, in the evaluation of academic research projects. Moreover, instead of evaluating projects using the four main criteria in equal weight, a systematic decision-making process was developed using the 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects.
Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method
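The AHP-into-TOPSIS pipeline described above can be sketched in a few lines of numpy. The pairwise comparison matrix and the per-criterion project scores below are invented for illustration (the study used 13 sub-criteria and 200 projects), and all criteria are treated as benefit criteria for simplicity.

```python
# AHP criteria weights (principal eigenvector via power iteration)
# feeding TOPSIS closeness coefficients; all input data are placeholders.
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Criteria weights as the normalized principal eigenvector."""
    w = np.ones(pairwise.shape[0])
    for _ in range(iters):
        w = pairwise @ w
        w /= w.sum()
    return w

def topsis(X, weights):
    """Closeness to the ideal solution (benefit criteria only)."""
    V = X / np.sqrt((X ** 2).sum(axis=0)) * weights   # weighted normalized
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # Euclidean distances
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Hypothetical pairwise comparisons of the four main criteria
# (originality, methodology, team, impact) on Saaty's 1-9 scale:
P = np.array([[1.0, 3.0, 5.0, 3.0],
              [1/3, 1.0, 3.0, 1.0],
              [1/5, 1/3, 1.0, 1/3],
              [1/3, 1.0, 3.0, 1.0]])
w = ahp_weights(P)
scores = topsis(np.array([[7., 6., 8., 5.],    # project 1, per criterion
                          [9., 5., 6., 7.],    # project 2
                          [4., 8., 7., 6.]]),  # project 3
                w)
ranking = np.argsort(scores)[::-1]  # best-scoring project first
```

In practice a consistency ratio check on the pairwise matrix, and cost-type criteria handling in TOPSIS, would be added on top of this skeleton.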
Procedia PDF Downloads 589
Modern Day Second Generation Military Filipino Amerasians and Ghosts of the U.S. Military Prostitution System in West Central Luzon's 'AMO Amerasian Triangle'
Authors: P. C. Kutschera, Elena C. Tesoro, Mary Grace Talamera-Sandico, Jose Maria G. Pelayo III
Abstract:
Second generation military Filipino Amerasians comprise a formidable contemporary segment of the estimated 250,000-plus biracial Amerasians in the Philippines today. Overall, they are a stigmatized and socioeconomically marginalized diaspora; historically, they were abandoned or estranged by U.S. military personnel fathers assigned during the century-long Colonial, Post-World War II and Cold War era of permanent military basing (1898-1992). Indeed, U.S. military personnel remain stationed in smaller numbers in the Philippines today. This inquiry is an outgrowth of two recent small-sample studies. The first surfaced the impact of the U.S. military prostitution system on the formation of the ‘Derivative Amerasian Family Construct’ among first generation Amerasians; the second, a qualitative case study, suggested the continued destructive impetus of the prostitution system on second generation Amerasians. The intent of the current qualitative, multiple-case study was to actively seek out second generation sex industry toilers. The purpose was to focus further on this human phenomenon in the post-basing and post-military prostitution system eras. As background, the former military prostitution apparatus has transformed into a modern dynamic of rampant sex tourism and prostitution nationwide. This is characterized by hotels and resorts offering unrestricted carnal access, urban and provincial brothels (casas), discos, bars and pickup clubs, massage parlors, local barrio karaoke bars and street prostitution. A small case study sample (N = 4) of female and male second generation Amerasians was selected. Sample formation employed a non-probability ‘snowball’ technique drawing respondents from the notorious Angeles, Metro Manila, Olongapo City ‘AMO Amerasian Triangle’, where most former U.S. military installations were sited and modern sex tourism thrives. 
A six-month study and analysis of in-depth interviews with female and male sex laborers, their families and peers revealed a litany of disturbing and troublesome experiences. Results showed profiles of debilitating human poverty, histories of family disorganization, stigmatization, social marginalization and the ghost of the military prostitution system and its harmful legacy on Amerasian family units. Emerging were testimonials of wayward young people ensnared in a maelstrom of deep economic deprivation, familial dysfunction, psychological desperation and societal indifference. The paper recommends that more study is needed and that the implications of the unstudied psychosocial and socioeconomic experiences of distressed younger generations of military Amerasians require specific research. Heretofore apathetic or disengaged U.S. institutions need to confront the issue and formulate activist, solution-oriented social welfare, human services and immigration easement policies and alternatives. These institutions specifically include academic and social science research agencies, corporate foundations, the U.S. Congress, and the Departments of State, Defense, Health and Human Services, and Homeland Security (i.e., Citizenship and Immigration Services). It is they who continue to endorse a laissez-faire policy of non-involvement on the entire Filipino Amerasian question. Such apathy, the paper concludes, relegates this consequential but neglected blood progeny to a status of humiliating destitution and exploitation. Amerasians thus remain entrapped in their former colonial and neo-colonial habitat. Ironically, they are unwitting victims of a U.S. American homeland that fancies itself geo-politically as a strong and strategic military treaty ally of the Philippines in the Western Pacific.
Keywords: Asian Americans, diaspora, Filipino Amerasians, military prostitution, stigmatization
Procedia PDF Downloads 487