Search results for: mutex task generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5385


375 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography

Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq

Abstract:

Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture that should otherwise be rigid, and warrant operative management. Clinical and radiological evaluation of intraoperative syndesmosis stability remains challenging, as traditional 2D fluoroscopy is limited to uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare stress-induced syndesmosis displacements as measured by 2D fluoroscopy and by 3D cone-beam computed tomography (CBCT). Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) together with the inferior transverse ligament (ITL), and (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers, and inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found between the trends of the 2D and 3D measurements of TFO, TCS and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments while TCS was directly proportional to it, across all cadavers and ‘resection + stress’ states.
Using 2D fluoroscopy, this trend did not hold under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change < 2 mm, the threshold commonly used as a buffer for physiologic laxity according to the surgeon's clinical judgment. Excellent agreement (> 0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995 - 0.999). Conclusions: 3D CIOS CBCT appears to depict the trends in TFO and TCS reliably. This may be due to its additional detection of relevant rotational malpositions of the fibula compared with standard 2D fluoroscopy, which is limited to single-plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
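As a hedged illustration of the inter-rater reliability statistic used above, the following sketch computes a two-way random-effects, single-measure intraclass correlation coefficient (ICC(2,1)) for two observers from the classical ANOVA mean squares; the sample scores are hypothetical, not the study's data.

```python
import numpy as np

def icc2_1(scores):
    """Two-way random-effects, single-measure ICC (ICC(2,1)).

    scores: array of shape (n_subjects, k_raters).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)            # mean square, subjects
    msc = ss_cols / (k - 1)            # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))  # mean square, error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With hypothetical measurements from two observers that differ only slightly, the coefficient approaches 1, which is the "excellent agreement" band (> 0.90) reported above.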

Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury

374 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. The model provides the user with theoretical support for designing lithium-ion battery parameters, such as the direction in which to adjust the material particle size or the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick's law of diffusion and the MacInnes and Ohm's equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods available in the literature can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It combines the implicit and explicit Euler methods, with the advantage of being second-order accurate in time and unconditionally stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique allows the user to implement parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
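The stability contrast discussed above can be illustrated on a generic 1D diffusion equation (a stand-in for the electrolyte diffusion PDE, not the authors' implementation): with the mesh ratio r = D·Δt/Δx² above the explicit-Euler limit of 0.5, explicit Euler diverges while Crank-Nicolson remains bounded for any r.

```python
import numpy as np

def step_explicit(u, r):
    """One explicit-Euler step of u_t = D u_xx (zero Dirichlet ends), r = D*dt/dx^2."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def step_crank_nicolson(u, r):
    """One Crank-Nicolson step: solve (I - r/2*A) u_new = (I + r/2*A) u_old."""
    m = len(u) - 2                                   # interior unknowns
    A = (np.diag((1 + r) * np.ones(m))
         + np.diag((-r / 2) * np.ones(m - 1), 1)
         + np.diag((-r / 2) * np.ones(m - 1), -1))
    rhs = u[1:-1] + (r / 2) * (u[2:] - 2 * u[1:-1] + u[:-2])
    un = u.copy()
    un[1:-1] = np.linalg.solve(A, rhs)
    return un

# Seed the grid with the highest spatial mode, where instability appears first.
u0 = np.zeros(21)
u0[1:-1] = (-1.0) ** np.arange(19)
r = 1.0                                              # above the explicit limit r <= 0.5
ue, uc = u0, u0
for _ in range(10):
    ue = step_explicit(ue, r)
    uc = step_crank_nicolson(uc, r)
```

After ten steps, `ue` has grown by orders of magnitude while `uc` has decayed toward zero, which is the behaviour the abstract attributes to the two schemes.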

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

373 Gravitational Water Vortex Power Plant: Experimental-Parametric Design of a Hydraulic Structure Capable of Inducing the Artificial Formation of a Gravitational Water Vortex Appropriate for Hydroelectric Generation

Authors: Henrry Vicente Rojas Asuero, Holger Manuel Benavides Muñoz

Abstract:

Approximately 80% of the energy consumed worldwide is generated from fossil sources, which are responsible for the emission of a large volume of greenhouse gases. For this reason, the current global trend is the widespread use of energy produced from renewable sources. This seeks safety and diversification of the energy supply, based on social cohesion, economic feasibility and environmental protection. In this scenario, small hydropower systems (P ≤ 10 MW) stand out due to their high efficiency, economic competitiveness and low environmental impact. Small hydropower systems, along with wind and solar energy, are expected to represent a significant percentage of the world's energy matrix in the near term. Among the various technologies in the state of the art relating to small hydropower systems is the gravitational water vortex power plant, a recent technology that excels because of its operational versatility, since it can operate with heads in the range of 0.70 m to 2.00 m and flow rates from 1 m³/s to 20 m³/s. Its operating principle is based on harnessing the rotational energy contained in a large, artificially induced water vortex. This paper presents the study and experimental design of an optimal hydraulic structure capable of inducing the artificial formation of a gravitational water vortex through a system of easy application and high efficiency, able to operate under conditions of very low head and minimum flow. The proposed structure consists of a variable-base channel acting as vortex inductor and tangential flow generator, coupled to a circular tank with a conical-transition bottom hole. In the laboratory tests, the angular velocity of the water vortex was related to the geometric characteristics of the inductor channel, and the influence of the conical-transition bottom hole on the physical characteristics of the water vortex was assessed.
The results show angular velocity values of greater magnitude as a function of depth; in addition, the presence of the conical transition in the bottom hole of the circular tank improves the conditions for water vortex formation while increasing the angular velocity values. Thus, the proposed system is a sustainable solution for the energy supply of rural areas near watercourses.

Keywords: experimental model, gravitational water vortex power plant, renewable energy, small hydropower

372 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e. acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e. design spectra for structural components). A database of 27 Reinforced Concrete (RC) buildings, in which Ambient Vibration Measurements (AVM) had been conducted, was compiled. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering 4 different damping ratios of NSCs (i.e. 2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically. These parameters comprise the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC.
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
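To make the floor-response-spectrum concept above concrete, the sketch below computes a pseudo-acceleration response spectrum from a floor acceleration record by time-stepping a damped single-degree-of-freedom oscillator at each trial period. This is the generic textbook procedure, not the authors' proposed FDS method; the integration scheme and all numerical values are illustrative.

```python
import numpy as np

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a floor acceleration record.

    ag      : floor acceleration time history (m/s^2)
    dt      : sampling interval (s); must be small relative to the shortest period
    periods : oscillator natural periods to evaluate (s)
    zeta    : viscous damping ratio of the NSC (e.g. 0.02, 0.05, 0.10, 0.20)
    """
    sa = []
    for T in periods:
        w = 2 * np.pi / T
        x_prev, x, peak = 0.0, 0.0, 0.0
        for a_g in ag:
            # central-difference step of  x'' + 2*zeta*w*x' + w^2*x = -ag,
            # with the velocity approximated by a backward difference
            v = (x - x_prev) / dt
            acc = -a_g - 2 * zeta * w * v - w * w * x
            x_prev, x = x, 2 * x - x_prev + dt * dt * acc
            peak = max(peak, abs(x))
        sa.append(w * w * peak)       # pseudo-acceleration Sa = w^2 * max|x|
    return np.array(sa)
```

Run on a harmonic floor motion, the spectrum peaks at the oscillator period tuned to the excitation, which is the tuning effect the abstract identifies as a driver of NSC demand.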

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

371 Comparison between Conventional Bacterial and Algal-Bacterial Aerobic Granular Sludge Systems in the Treatment of Saline Wastewater

Authors: Philip Semaha, Zhongfang Lei, Ziwen Zhao, Sen Liu, Zhenya Zhang, Kazuya Shimizu

Abstract:

The increasing generation of saline wastewater by various industrial activities is becoming a global concern for activated sludge (AS) based biological treatment, which is widely applied in wastewater treatment plants (WWTPs). In the AS process, an increase in wastewater salinity has a negative impact on overall performance. Conventional aerobic granular sludge (AGS), or bacterial AGS, biotechnology has gained much attention because of its superior performance. The development of algal-bacterial AGS could enhance nutrient removal, potentially reduce aeration cost through symbiotic algal-bacterial activity, and thus reduce the overall treatment cost. Nonetheless, the potential of salt stress to decrease biomass growth, microbial activity and nutrient removal exists. Up to the present, little information is available on saline wastewater treatment by algal-bacterial AGS. To the authors' best knowledge, the two AGS systems have not been compared in terms of nutrient removal capacity in the context of increasing salinity. This study sought to determine the impact of salinity on the algal-bacterial AGS system in comparison with the bacterial AGS one, contributing to the application of AGS technology to real-world saline wastewater treatment. The salt concentrations tested were 0 g/L, 1 g/L, 5 g/L, 10 g/L and 15 g/L of NaCl under 24-h artificial illuminance of approximately 97.2 µmol·m⁻²·s⁻¹, and mature bacterial and algal-bacterial AGS were used for the operation of two identical sequencing batch reactors (SBRs) with a working volume of 0.9 L each. The results showed that the salinity increase caused no apparent change in the color of the bacterial AGS, while the color of the algal-bacterial AGS progressively changed from green to dark green.
A consequent increase in granule diameter and fluffiness was observed in the bacterial AGS reactor as salinity increased, in contrast to a decrease in algal-bacterial AGS granule diameter. Nitrite accumulation rose from 1.0 mg/L and 0.4 mg/L at 1 g/L NaCl in the bacterial and algal-bacterial AGS systems, respectively, to a peak of 9.8 mg/L in both systems as the NaCl concentration varied from 5 g/L to 15 g/L. Almost no ammonia nitrogen was detected in the effluent except at 10 g/L NaCl, where it averaged 4.2 mg/L and 2.4 mg/L in the bacterial and algal-bacterial AGS systems, respectively. Nutrient removal in the algal-bacterial system was relatively higher than in the bacterial AGS system in terms of nitrogen and phosphorus. Nonetheless, the nutrient removal rate was almost 50% or lower. The results show that algal-bacterial AGS is more adaptable to salinity increases and could be more suitable for saline wastewater treatment. Optimization of the operating conditions for the algal-bacterial AGS system will be important to ensure stably high efficiency in practice.

Keywords: algal-bacterial aerobic granular sludge, bacterial aerobic granular sludge, nutrients removal, saline wastewater, sequencing batch reactor

370 Improved Approach to the Treatment of Resistant Breast Cancer

Authors: Lola T. Alimkhodjaeva, Lola T. Zakirova, Soniya S. Ziyavidenova

Abstract:

Background: Breast cancer (BC) remains one of the urgent problems in oncology. An essential obstacle to the full implementation of anti-tumor therapy is the development of drug resistance. Given that chemotherapy is the main antitumor treatment in BC patients, improving treatment results is an important task. Certain success in overcoming this situation has been associated with methods of extracorporeal blood treatment (ECBT) such as plasmapheresis. Materials and Methods: We examined 129 women with resistant BC of stages 3-4, aged 56 to 62 years, who had previously received 2 courses of CAF chemotherapy. All patients additionally underwent 2 courses of CAF chemotherapy against the background of ECBT with ultrasonic exposure. We studied the following parameters: 1. Peripheral blood indices before and after therapy. 2. The state of cellular immunity: identification of the activation markers CD23+, CD25+, CD38+ and CD95+ on lymphocytes was performed using monoclonal antibodies, and humoral immunity was evaluated from the serum levels of the main immunoglobulin classes IgG, IgA and IgM. 3. The degree of tumor regression, assessed by the 4 gradations recommended by the WHO (complete response: 100% regression; partial response: regression of more than 50% of the initial size; stabilization: regression of less than 50% of the initial size; and disease progression). 4. Therapeutic pathomorphism in the tumor, determined according to Lavnikova. 5. Immediate and long-term results, up to 3 years and beyond. Results and Discussion: After extracorporeal blood treatment, anemia occurred in 38.9%, leukopenia in 36.8%, thrombocytopenia in 34.6%, and hypolymphemia in 26.8% of patients. Studies of the immunoglobulin fractions in blood serum established a certain relationship between the immunoglobulin classes A, G and M and their functions. The results showed that after treatment the values of the main immunoglobulins in the patients' serum approached normal.
Analysis of the expression of the activation markers CD25+ (cells bearing receptors for IL-2, the IL-2Rα chain) and CD95+ (lymphocytes mediating physiological apoptosis) showed a tendency to increase, which apparently was due to activation of cellular immunity by cytokines released under ultrasonic treatment. Carrying out ECBT against the background of ultrasonic treatment improved the parameters of the immune system, expressed as stimulation of cellular immunity and correction of imbalances in humoral immunity. The key indicator of treatment efficiency is the immediate result, measured by the degree of tumor regression. After ECBT, complete regression was observed in 10.3%, partial response in 55.5%, and stabilization in 34.5% of patients; no disease progression was observed. Morphological investigation of the tumors determined therapeutic pathomorphism of grade 2 in 15%, grade 3 in 25%, and grade 4 in 60% of patients. One of the main criteria of treatment effect is the duration of remission in the postoperative period (up to 3 years or more). The 3-year remission rate with ECBT was 34.5%, and the 5-year survival was 54%. This research suggests that a comprehensive study of the immunological and clinical course of breast cancer allows a differentiated approach to the choice of methods for effective treatment.

Keywords: breast cancer, immunoglobulins, extracorporeal blood treatment, chemotherapy

369 Solar Power Generation in a Mining Town: A Case Study for Australia

Authors: Ryan Chalk, G. M. Shafiullah

Abstract:

Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and in the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy which can supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site, considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and to produce results of integrating PV. Large-scale PV penetration introduces technical challenges to the network, including voltage deviation, increased harmonic distortion, increased available fault current, and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters.
Results show that the use of inverters with harmonic filtering reduces the level of harmonic injection to a level acceptable under Australian standards. Furthermore, configuring the inverters to supply both active and reactive power assists in mitigating low power factor. The use of FACTS devices such as the SVC and STATCOM also reduces harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
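As a small illustration of the harmonic-distortion metric implied above, total harmonic distortion (THD) can be computed from the RMS magnitudes of the fundamental and its harmonics; this is the generic definition, not any specific Australian-standard procedure, and the example values are made up.

```python
import math

def thd(rms_components):
    """Total harmonic distortion: RMS of harmonics 2..N divided by the
    RMS of the fundamental. rms_components[0] is the fundamental."""
    fundamental = rms_components[0]
    harmonics = rms_components[1:]
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental
```

For example, a 100 A fundamental with 3 A and 4 A harmonic components gives a THD of 5%; harmonic filtering lowers the harmonic terms and hence the ratio.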

Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality

368 Toxicological Analysis of Some Plant Combinations Used for the Treatment of Hypertension by Lay People in Northern Kwazulu-Natal, South Africa

Authors: Mmbulaheni Ramulondi, Sandy Van Vuuren, Helene De Wet

Abstract:

The use of plant combinations to treat various medical conditions is not a new concept, and it is known that traditional people do not rely only on a single plant extract for efficacy but often combine various plant species for treatment. The knowledge of plant combinations is transferred from one generation to the next in the belief that combination therapy may enhance efficacy, reduce toxicity, decrease adverse effects, increase bioavailability and allow lower dosages. However, combination therapy may also be harmful when the interaction is antagonistic, since it may result in increased toxicity. Although a fair amount of research has been done on the toxicity of medicinal plants, very little has addressed the toxicity of medicinal plants in combination. The aim of this study was to assess the toxicity potential of 19 plant combinations documented as treatments for hypertension by lay people in northern KwaZulu-Natal. The aqueous extracts were assessed using two assays: the brine shrimp assay (Artemia franciscana) and the Ames test (mutagenicity). Only one plant combination in the current study (Aloe marlothii with Hypoxis hemerocallidea) has previously been assessed for toxicity. In the brine shrimp assay, the plant combinations were tested at two concentrations (2 and 4 mg/ml), while for the mutagenicity tests they were tested at 5 mg/ml. The results showed that in the brine shrimp assay six combinations were toxic at 4 mg/ml: Albertisia delagoensis with Senecio serratuloides (57%), Aloe marlothii with Catharanthus roseus (98%), Catharanthus roseus with Hypoxis hemerocallidea (66%), Catharanthus roseus with Musa acuminata (89%), Catharanthus roseus with Momordica balsamina (99%) and Aloe marlothii with Trichilia emetica and Hyphaene coriacea (50%).
However, when the concentration was reduced to 2 mg/ml, only three combinations were toxic: Aloe marlothii with Catharanthus roseus (76%), Catharanthus roseus with Musa acuminata (66%) and Catharanthus roseus with Momordica balsamina (73%). In the mutagenicity assay, only the combinations of Catharanthus roseus with Hypoxis hemerocallidea and of Catharanthus roseus with Momordica balsamina were mutagenic towards the Salmonella typhimurium strains TA98 and TA100. Most of the toxic combinations involved C. roseus, which was also toxic when tested singly. It is worth noting that C. roseus was one of the plant species most frequently used to treat hypertension, both singly and in combination, and some individuals have been using it for the last 20 years. The brine shrimp mortality percentage showed a significant correlation between dosage and toxicity; thus, toxicity was dose-dependent. One combination worth noting is that between A. delagoensis and S. serratuloides. Singly, these plants were non-toxic towards brine shrimp; however, their combination resulted in antagonism, with a mortality rate of 57% at a total concentration of 4 mg/ml. Low toxicity was mostly observed, giving some validity to combined use; however, the few combinations showing increased toxicity demonstrate the importance of analysing plant combinations.

Keywords: dosage, hypertension, plant combinations, toxicity

367 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharide) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is therefore generally applied before enzymatic treatment, essentially removing the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments to attain the maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of the individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored.
The results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and lower formation of inhibitory compounds, which may be because this mode of pre-treatment combines several mild treatments rather than a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

366 Enhancing the Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to a Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, while obtaining a more effective solution. During the early stages of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another; effective coordination among these decision-makers is critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
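As a minimal sketch of the Pareto idea invoked above (not the paper's evolutionary framework), the following filters a set of design alternatives, each scored by a hypothetical utility value per stakeholder, down to the non-dominated (Pareto-optimal) set:

```python
def dominates(q, p):
    """q dominates p if q is at least as good in every stakeholder's
    utility and strictly better in at least one."""
    return (all(qi >= pi for qi, pi in zip(q, p))
            and any(qi > pi for qi, pi in zip(q, p)))

def pareto_front(alternatives):
    """Keep only non-dominated alternatives (tuples of utilities, higher is better)."""
    return [p for p in alternatives if not any(dominates(q, p) for q in alternatives)]
```

For instance, with two stakeholders scoring six candidate designs, `pareto_front([(1, 5), (2, 4), (3, 3), (2, 2), (1, 1), (4, 1)])` keeps the four trade-off designs and discards the two dominated ones; an evolutionary search like the one described above repeatedly applies this kind of dominance test while generating new candidates.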

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 136
365 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass

Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski

Abstract:

The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energetic purposes and, simultaneously, recovery of the input energy consumed for the pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration that has a high implementation potential. Cavitation is the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to the critical value. It is induced by a varying field of pressures: a void needs to occur in the flow in which the pressure first drops to a value close to the pressure of saturated vapor and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates having various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds to the dissolved phase has a direct effect on the intensification of biogas production in the process of anaerobic fermentation, on reduced dry matter content in the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dehydration. One device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale is the hydrodynamic cavitation generator.
Cavitators, agitators and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids; for this reason, their usability was limited to the function of heating installations. The design of the presented cavitation generator achieves satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted on the laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for disintegration of sewage sludge increased biogas production by ca. 30% and shortened the treatment process by ca. 20-25%. The shortening of the technological process and the increase in wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and the application of a repeated cavitation process (4-6 times) enable significant acceleration of the biogas production process. In addition, mechanical cavitation accelerates increases in COD and VFA levels.
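The onset criterion described above (local pressure falling toward the saturated vapor pressure) is commonly quantified by the dimensionless cavitation number. The sketch below uses the textbook formula with illustrative values for water, not the parameters of the authors' device.

```python
# Textbook cavitation-number check: cavitation becomes likely as
# sigma = (p - p_vapor) / (0.5 * rho * v^2) drops toward ~1, i.e. the local
# pressure approaches the saturated vapor pressure of the liquid.

def cavitation_number(p, p_vapor, rho, velocity):
    return (p - p_vapor) / (0.5 * rho * velocity ** 2)

# Water at ~20 degC forced through a constriction at 14 m/s (illustrative values).
sigma = cavitation_number(p=101_325.0, p_vapor=2_340.0, rho=998.0, velocity=14.0)
print(f"sigma = {sigma:.2f}")  # close to 1 -> cavitation onset is plausible
```

Doubling the flow velocity quarters sigma, which is why accelerating the liquid through a constriction is the standard way such generators trigger cavitation.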

Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals

Procedia PDF Downloads 432
364 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change

Authors: Franciska Weise, Ralph Bruder

Abstract:

Due to the demographic change in Germany, with its declining birth rate and the increasing age of the population, the share of older people in society is rising. This development is also reflected in the workforce of German companies. Therefore, companies should focus on improving ergonomics, especially in the area of age-related work design. Literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging population is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. Existing literature does not cover any methods for investigating the attitudes of employees towards job rotation. However, in order to evaluate job rotation, it is essential to know the views of people towards rotation. In addition to an investigation of attitudes, the design of rotation plays a crucial role. The sequence of activities and the rotation frequency influence both the worker and the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may never rotate, rotate once or a few times per shift, rotate at every break, or even more often than that. Whether they do depends on the opportunity to rotate whenever they want to. From the preliminary talks, some challenges can be derived. For example, a rotation in the whole team is not possible if a team member needs to be trained for a new task.
In order to determine the relation between the design of and the attitude towards job rotation, a questionnaire survey was carried out in vehicle manufacturing. The questionnaire is employed to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees are compared with regard to their rotation frequency and their attitudes towards rotation, using three age groups. Three questions are under examination. The first is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards the frequency of job rotation are interconnected. The third is how the attitudes of the different age groups towards the frequency of rotation differ. Up to now, 144 employees, all working in production, took part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were above forty-five years old. The data show no difference between the three age groups in relation to the frequency of job rotation (N=139, median=4, Chi²=.859, df=2, p=.651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards the frequency (Spearman's rho, 2-sided p=.008, correlation coefficient=.223). Fewer than four workplaces per day are not enough for the employees. The third question, which differences can be found between older and younger people who rotate in different ways and with different attitudes towards job rotation, cannot yet be answered. So far, the data show that younger people would like to rotate very often. For older people, no correlation with acceptable significance can be found.
The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees with respect to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Based on the survey results and together with the employees, standards must be developed that show how to rotate in an ergonomic way while considering attitudes towards job rotation.
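The group comparison reported above rests on a chi-square statistic over a contingency table of age group versus rotation frequency. The sketch below re-implements that computation in plain Python; the counts are invented for illustration, not the survey data.

```python
# Pearson chi-square statistic for a rows x cols contingency table, of the
# kind used to compare rotation frequency across age groups. Hypothetical
# counts only; the study's reported values were Chi^2=.859, df=2, p=.651.

def chi_square(observed):
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# rows: age groups (<30, 30-44, >=45); cols: low vs high rotation frequency
table = [[20, 30],
         [22, 30],
         [15, 22]]
print(round(chi_square(table), 3))  # df = (3-1)*(2-1) = 2 for this table
```

A statistic near zero, as in the study, indicates that the observed counts sit close to the expected counts under independence of age group and rotation frequency.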

Keywords: job rotation, age-related work design, questionnaire, automotive industry

Procedia PDF Downloads 303
363 Ethical Artificial Intelligence: An Exploratory Study of Guidelines

Authors: Ahmad Haidar

Abstract:

The rapid adoption of Artificial Intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, triggering research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; in doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building Ethical AI linked to sustainability. This research adopts an explorative, more specifically an inductive, approach to address the theoretical gap. Consequently, this paper tracks the different efforts towards “trustworthy AI” and “ethical AI,” arriving at a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, providing the different technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI.
Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. This study’s last contribution clarifies how the principles have evolved. To illustrate, in 2018, the Montreal Declaration mentioned eight principles, including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, further notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies. Yet, we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
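The frequency test in the paper's second analysis step amounts to counting how many of the guideline documents mention each principle. A minimal sketch, with invented stand-in documents rather than the 12 analyzed guidelines:

```python
# Count how often each ethics principle appears across guideline documents.
# The document contents below are illustrative stand-ins only.

from collections import Counter

documents = [
    {"transparency", "privacy", "fairness", "accountability"},
    {"transparency", "safety", "autonomy", "privacy"},
    {"fairness", "transparency", "accountability", "do no harm"},
]
frequency = Counter(p for doc in documents for p in doc)
for principle, count in frequency.most_common():
    print(principle, count)  # transparency appears in all three stand-in documents
```

Ranking principles by such counts is what lets the study single out transparency, privacy, fairness, safety, autonomy, and accountability as the most frequently recommended technical principles.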

Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI

Procedia PDF Downloads 93
362 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes, which have taken place across the world, have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe-Japan, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs, and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. When an earthquake occurs near the fault lines, it can be categorized as a near-field earthquake. In contrast, a far-field earthquake occurs when the region is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in the current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled utilizing the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. Upon doing this, the prototype structural model, created using SAP2000, was subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
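To see why a high-energy pulse tends to drive a larger peak response, a single-degree-of-freedom oscillator can be integrated under two illustrative ground motions. This is a generic SDOF sketch with invented parameters, not the SAP2000 prototype structure or the FEMA P695 records.

```python
# Minimal linear time-history sketch: a 1 s period, 5%-damped SDOF oscillator
# responds more strongly to a single long-period acceleration pulse (near-field
# style) than to weaker, higher-frequency shaking sustained over a longer time
# (far-field style). All values are illustrative.

import math

def peak_displacement(ground_acc, dt, period=1.0, damping=0.05):
    wn = 2.0 * math.pi / period          # natural circular frequency (rad/s)
    u, v, peak = 0.0, 0.0, 0.0
    for ag in ground_acc:                # semi-implicit Euler integration
        a = -ag - 2.0 * damping * wn * v - wn ** 2 * u
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

dt, n = 0.005, 4000
t = [i * dt for i in range(n)]
# near-field style: one strong 1 s half-sine acceleration pulse (m/s^2)
pulse = [3.0 * math.sin(math.pi * ti) if ti < 1.0 else 0.0 for ti in t]
# far-field style: weaker 3 Hz shaking sustained over 10 s
far = [0.8 * math.sin(6.0 * math.pi * ti) if ti < 10.0 else 0.0 for ti in t]
print(peak_displacement(pulse, dt), peak_displacement(far, dt))
```

For this long-period oscillator the pulse case yields the larger peak displacement, which mirrors the larger story drifts the study observed under near-field motions.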

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 146
361 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic has also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on simulation for pre-clinical students, while the students’ confidence levels and perceived competence still need to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students’ learning experience, to introduce more low-fidelity simulation-based teaching sessions for the pre-clinical years, and to obtain students’ input into curriculum development as part of inclusivity.
The study was carried out at the International Medical University, involving pre-clinical year (medical) students who began low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach’s alpha using an Excel file. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman’s rank correlation was used to analyze the correlation between students’ satisfaction and self-confidence in learning. The significance level was set at a p value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback provided input on the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, skills and knowledge of the facilitator, and effective feedback.
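The internal-consistency check described above can be re-implemented directly from the Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum(item variances)/variance(totals)). The item scores below are invented examples on a 5-point scale, not the study's responses (which were computed in Excel).

```python
# Cronbach's alpha from first principles: alpha close to 1 means the
# questionnaire items vary together, i.e. high internal consistency.

def cronbach_alpha(items):
    """items: one inner list of scores per questionnaire item (same respondents)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# 4 items answered by 5 respondents (rows = items, columns = respondents)
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 4, 4, 3],
    [4, 5, 3, 4, 2],
]
print(round(cronbach_alpha(scores), 2))  # high value: items track each other
```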

Keywords: low-fidelity simulation, pre-clinical simulation, student satisfaction, self-confidence

Procedia PDF Downloads 75
360 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by the friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters like axial force, rotation speed or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and finally identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which enables the temperature rise, due to pure friction, of the system composed of the tool, the filler material, and the substrate. Analytic modeling of the friction-based heat generation considers the rotational speed and the contact pressure as the main parameters. Another parameter considered influential is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises or the smoothing of the contacting materials' roughness over time.
This study proposes, through numerical modeling followed by experimental validation, to question the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters like axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
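For a flat circular sliding contact under uniform pressure, the analytic friction heat model mentioned above is commonly written as Q = (2/3) * pi * mu * p * omega * R^3. The sketch below evaluates that textbook expression with invented parameters, not the prototype effector's operating point.

```python
import math

# Total frictional heat input (W) for a flat circular tool contact of radius R
# under uniform pressure p, friction coefficient mu, and spindle speed omega.
# Standard sliding-friction formula; all parameter values are illustrative.

def friction_heat(mu, pressure, omega, radius):
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius ** 3

rpm = 1200.0
omega = rpm * 2.0 * math.pi / 60.0   # spindle speed converted to rad/s
q = friction_heat(mu=0.4, pressure=50e6, omega=omega, radius=0.008)
print(f"{q:.0f} W")
```

The cubic dependence on radius is why tool geometry dominates the heating rate, consistent with the study's observation that tool geometry strongly influences the temperature reached during the dwell phase.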

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 145
359 Immunomodulatory Role of Heat Killed Mycobacterium indicus pranii against Cervical Cancer

Authors: Priyanka Bhowmik, Subrata Majumdar, Debprasad Chattopadhyay

Abstract:

Background: Cervical cancer is the third major cause of cancer in women and the second most frequent cause of cancer-related deaths, causing 300,000 deaths annually worldwide. Evasion of the immune response by Human Papillomavirus (HPV), the key contributing factor behind cancer and pre-cancerous lesions of the uterine cervix, makes immunotherapy a necessity for treating this disease. Objective: A heat-killed fraction of Mycobacterium indicus pranii (MIP), a non-pathogenic mycobacterium, has been shown to exhibit cytotoxic effects on different cancer cells, including the human cervical carcinoma cell line HeLa. However, the underlying mechanisms remain unknown. The aim of this study is to decipher the mechanism of MIP-induced HeLa cell death. Methods: The cytotoxicity of Mycobacterium indicus pranii against HeLa cells was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Apoptosis was detected by annexin V and propidium iodide (PI) staining. Reactive oxygen species (ROS) generation and cell cycle distribution were measured by flow cytometry. The expression of apoptosis-associated genes was analyzed by real-time PCR. Result: MIP could inhibit the proliferation of HeLa cells in a time- and dose-dependent manner while causing only minor damage to normal cells. The induction of apoptosis was confirmed by the cell surface presentation of phosphatidylserine, DNA fragmentation, and mitochondrial damage. MIP caused very early (as early as 30 minutes) transcriptional activation of p53, followed by a higher activation (32-fold) at 24 hours, suggesting the prime importance of p53 in MIP-induced apoptosis in HeLa cells. The upregulation of the p53-dependent pro-apoptotic genes Bax, Bak, PUMA, and Noxa followed a lag phase that was required for the transcriptional p53 program. MIP also caused the transcriptional upregulation of Toll-like receptors 2 and 4 after 30 minutes of MIP treatment, suggesting recognition of MIP by Toll-like receptors.
Moreover, MIP inhibited the expression of the HPV anti-apoptotic gene E6, which is known to interfere with the p53/PUMA/Bax apoptotic cascade. This inhibition might have played a role in the transcriptional upregulation of PUMA and subsequently in apoptosis. ROS was generated transiently, concomitant with the highest transcriptional activation of p53, suggesting a plausible feedback loop between p53 and ROS in the apoptosis of HeLa cells. A ROS scavenger, N-acetyl-L-cysteine, decreased apoptosis, suggesting that ROS is an important effector of MIP-induced apoptosis. Conclusion: Taken together, MIP has full potential to be a novel therapeutic agent in the clinical treatment of cervical cancer.
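Fold changes like the 32-fold p53 activation reported above are conventionally derived from real-time PCR cycle thresholds via the 2^(-delta-delta-Ct) method. The sketch below shows that arithmetic with invented Ct values; it is not the study's raw data.

```python
# 2^(-ddCt) fold-change arithmetic: a gene amplifying 5 cycles earlier
# (relative to a reference gene) after treatment corresponds to a 32-fold
# increase in transcript level. Ct values below are illustrative only.

def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    ddct = (ct_gene_treated - ct_ref_treated) - (ct_gene_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical p53 vs reference-gene Ct values, MIP-treated vs control cells:
print(fold_change(20.0, 18.0, 25.0, 18.0))  # -> 32.0
```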

Keywords: cancer, mycobacterium, immunity, immunotherapy

Procedia PDF Downloads 248
358 Evaluating Gender Sensitivity and Policy: Case Study of an EFL Textbook in Armenia

Authors: Ani Kojoyan

Abstract:

Linguistic studies have been investigating the connection between gender and linguistic development since the 1970s. Scholars claim that gender differences in first and second language learning are socially constructed. Recent studies of language learning and gender reveal that second language acquisition is also a social phenomenon directly influencing one’s gender identity. Those responsible for designing language learning-teaching materials should be encouraged to understand the importance of gender sensitivity and to address it accurately in textbooks. Writing or compiling a textbook is not an easy task; it requires strong academic abilities, patience, and experience. For a long period of time, Armenia has been involved in the compilation of a number of foreign language textbooks. However, there have been very few discussions or evaluations of those textbooks that would allow specialists to theorize that practice. The present paper focuses on the analysis of gender sensitivity issues and policy aspects involved in an EFL textbook. For the research, the following material has been considered: “A Basic English Grammar: Morphology”, first printed in 2011. The selection of the material is not accidental. First, the mentioned textbook has been widely used in university teaching over the years. Secondly, in Armenia, “A Basic English Grammar: Morphology” has been considered one of the most successful English grammar textbooks in a university teaching environment and has served as a source-book for other authors to compile and design their own textbooks. The present paper aims to find out whether an EFL textbook is gendered in the Armenian teaching environment and whether the textbook compilers are aware of gendered messages while compiling educational materials. It also aims at investigating students’ attitudes toward the gendered messages in those materials. And finally, it aims at increasing gender sensitivity among textbook compilers and educators in various educational settings.
For this study, qualitative and quantitative research methods have been applied: the quantitative one in terms of carrying out surveys among students (45 university students, 18-25 age group), and the qualitative one by discourse analysis of the material and by conducting in-depth and semi-structured interviews with the Armenian compilers of the textbook (interviews with 3 authors). The study is based on passive and active observations and teaching experience gained in a university classroom environment in 2014-2015 and 2015-2016. The findings suggest that the discussed and analyzed teaching materials (145 extracts and examples) include traditional examples of language use and role-modelling; particularly, men are mostly portrayed as active, progressive, and aggressive, whereas women are often depicted as passive and weak. These models often serve as a ‘reliable basis’ for reinforcing the traditional roles that are projected on female and male students. The survey results also show that such materials contribute directly to shaping learners’ social attitudes and expectations around issues of gender. The applied techniques and discussed issues can be generalized and applied to other foreign language textbook compilation processes, since those principles, regardless of the language, are mostly the same.

Keywords: EFL textbooks, gender policy, gender sensitivity, qualitative and quantitative research methods

Procedia PDF Downloads 194
357 Self-Selected Intensity and Discounting Rates of Exercise in Comparison with Food and Money in Healthy Adults

Authors: Tamam Albelwi, Robert Rogers, Hans-Peter Kubis

Abstract:

Background: Exercise is widely acknowledged as a highly important health behavior that reduces risks related to lifestyle diseases like type 2 diabetes and cardiovascular disease. However, exercise adherence is low in high-risk groups, and a sedentary lifestyle is more the norm than the exception. Expressed reasons for exercise participation are often based on delayed outcomes related to health threats and benefits, but also on enjoyment. That exercise is perceived as rewarding is well established in the animal literature, but the evidence is sparse in humans. Additionally, how stably any reward is perceived across time delays is an important question influencing decision-making (in favor of or against a behavior). For exercise as a modality, this has not been examined before. We, therefore, investigated the discounting of pre-established self-selected exercise compared with the established rewards of food and money using a computer-based discounting paradigm. We hypothesized that exercise would be discounted like an established reward (food and money); moreover, we expected the discounting rate to be similar to that of a consumable reward like food. Additionally, we expected individuals’ characteristics like preferred intensity, physical activity, and body characteristics to be associated with discount rates. Methods: 71 participants took part in four sessions. The sessions were designed to let participants select their preferred exercise intensity on a treadmill. Participants were asked to adjust their speed to optimize pleasantness over an exercise period of up to 30 minutes; heart rate and pleasantness rating were measured. In further sessions, the established exercise intensity was modified and tested for perceptual validity. In the last exercise session, the rating of perceived exertion was measured at the preferred intensity level.
Furthermore, participants filled in questionnaires related to physical activity, mood, craving, and impulsivity and answered choice questions in a bespoke computer task to establish the discounting rates of their preferred exercise (kex), their favorite food (kfood), and a value-matched amount of money (kmoney). Results: Participants’ self-selected preferred speed was 5.5±2.24 km/h, at a heart rate of 120.7±23.5 bpm and a perceived exertion rating of 10.13±2.06. This shows that participants preferred a light exercise intensity with low to moderate cardiovascular strain based on perceived pleasantness. Computer assessment of discounting rates revealed that exercise was discounted quickly, like a consumable reward, with no significant difference between kfood and kex (kfood=0.322±0.263; kex=0.223±0.203). However, kmoney (kmoney=0.080±0.02) was significantly lower than the rates for exercise and food. Moreover, significant associations were found between preferred speed and kex (r=-0.302) and between physical activity levels and preferred speed (r=0.324). The outcomes show that participants perceived and discounted self-selected exercise like an established reward (food and money), but discounted it more like a consumable reward. Moreover, exercise discounting was quicker in individuals who preferred lower speeds and were less physically active. This may show that in a choice conflict between exercise and food, the delay of exercise (because of distance) might disadvantage exercise as the chosen behavior, particularly in sedentary people. Conclusion: Exercise can be perceived as a reward and is discounted quickly in time, like food. A pleasant exercise experience is connected to low to moderate cardiovascular and perceptual strain.
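The k values above are rates in the standard hyperbolic discounting model, V = A / (1 + k * D): a larger k means a reward loses subjective value faster with delay D. The sketch below plugs the reported mean rates into that model with an invented reward magnitude and delay.

```python
# Mazur's hyperbolic discounting model: present value V = A / (1 + k * D).
# The k values are the study's reported means; amount and delay are illustrative.

def discounted_value(amount, k, delay):
    return amount / (1.0 + k * delay)

# A reward worth 100 now, delayed by 10 time units, at each modality's mean rate:
for label, k in [("exercise", 0.223), ("food", 0.322), ("money", 0.080)]:
    print(label, round(discounted_value(100.0, k, 10.0), 1))
# money retains the most subjective value (smallest k); food loses the most
```

This makes the paper's point concrete: with exercise discounted at nearly the food rate, even a modest delay (e.g. travel to a gym) sharply reduces its subjective value relative to an immediately available food reward.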

Keywords: delay discounting, exercise, temporal discounting, time perspective

Procedia PDF Downloads 269
356 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment

Authors: Emmanuel Ogala

Abstract:

Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical and personnel controls, cloud computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. This is due to the fact that the systems are open to the whole world, and as people try to gain access to the system, many others are trying day in, day out to gain unauthorized access. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems: first, the observed easy access to cloud computing resources and the complexity of attacks on the vital cloud computing data system NIC require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access. Second, there is a lack of a good methodology for the performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. In this research, we designed and implemented an ISS consisting of three security mechanisms, biometric (fingerprint), auto-PIN and password, combined into one stream of access control and used for securing examination records at Kogi State University, Anyigba. Conclusively, the system we built has been able to overcome the guessing abilities of hackers who guess people's passwords or PINs. We are certain about this because the added security layer (fingerprint) requires the presence of the user of the software before login access can be granted, based on the placement of the finger on the fingerprint biometric scanner for capture and verification of the user's authenticity.
The study adopted a quantitative design with an object-oriented analysis and design methodology. PHP, HTML5, CSS, Visual Studio, JavaScript, and Web 2.0 technologies were used to implement the ISS model for the cloud computing environment: PHP, HTML5 and CSS were used with Visual Studio as front-end design tools, MySQL and Access 7.0 served as the back-end engine, and JavaScript was used for object arrangement and for validating user input as a security check. Finally, the performance of the developed framework was evaluated by comparison with two existing security systems (auto-PIN and password) within the university, and the results showed that the developed approach (fingerprint) overcomes the two main weaknesses of the existing systems and will work well if fully implemented.
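The three-factor access-control stream described above (password, auto-PIN, fingerprint) amounts to a chain of independent checks that must all pass before login is granted. The sketch below is a minimal illustration of that idea only, not the authors' PHP implementation; the user record, the hash scheme, and the template-equality fingerprint "matcher" are hypothetical stand-ins.

```python
import hashlib

def verify_password(stored_hash: str, password: str) -> bool:
    # Passwords would be stored as salted hashes in practice; a plain
    # SHA-256 digest stands in here for brevity.
    return hashlib.sha256(password.encode()).hexdigest() == stored_hash

def verify_pin(stored_pin: str, pin: str) -> bool:
    return stored_pin == pin

def verify_fingerprint(enrolled: bytes, scanned: bytes) -> bool:
    # A real matcher compares minutiae features from the scanner; exact
    # template equality is a placeholder.
    return enrolled == scanned

def grant_access(user: dict, password: str, pin: str, scan: bytes) -> bool:
    # All three factors must pass before login access is granted.
    return (verify_password(user["pw_hash"], password)
            and verify_pin(user["pin"], pin)
            and verify_fingerprint(user["fingerprint"], scan))

user = {
    "pw_hash": hashlib.sha256(b"secret").hexdigest(),
    "pin": "4321",
    "fingerprint": b"template-001",
}
print(grant_access(user, "secret", "4321", b"template-001"))   # True
print(grant_access(user, "guessed", "4321", b"template-001"))  # False
```

The point the abstract makes, that a guessed password or PIN is not sufficient, corresponds to the final conjunct: without a matching fingerprint scan, `grant_access` returns False regardless of the other factors.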

Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment

Procedia PDF Downloads 139
355 The Impression of Adaptive Capacity of the Rural Community in the Indian Himalayan Region: A Way Forward for Sustainable Livelihood Development

Authors: Rommila Chandra, Harshika Choudhary

Abstract:

The value of integrated, participatory, and community-based sustainable development strategies is eminent, but in practice they often remain fragmentary and lead to short-lived results. Although climate change is a global phenomenon, its impacts are felt differently by different communities depending on their vulnerability. Developing countries have low adaptive capacity and high dependence on environmental variables, making them highly susceptible to outmigration and poverty. We need to understand how to enable these approaches, taking into account the various governmental and non-governmental stakeholders functioning at different levels, to deliver long-term socio-economic and environmental well-being for local communities. The research assessed the financial and natural vulnerability of Himalayan networks, focusing on their potential to adapt to various changes, through their perceived reactions and local knowledge. The evaluation was conducted by testing indices of vulnerability, with a major focus on indicators of adaptive capacity. Data for the analysis were collected from villages around Govind National Park and Wildlife Sanctuary, located in the Indian Himalayan Region. The villages were stratified on the basis of road connectivity, giving two kinds of human settlements: connected and isolated. The study focused on understanding the complex relationship between outmigration and the socio-cultural sentiment of local people not to abandon their land, assessing their adaptive capacity for livelihood opportunities, and exploring the contribution that integrated participatory methodologies can make to delivering sustainable development.
The results showed that villages with better road connectivity, access to markets, and basic amenities such as health and education have a better understanding of the climatic shift and natural hazards, and a higher adaptive capacity for income generation, than the isolated settlements in the hills. The participatory approach towards environmental conservation and sustainable use of natural resources was more evident in the far-flung villages. The study helped to reduce the gap between local understanding and government policies by highlighting ongoing adaptive practices and suggesting precautionary strategies for the community studied based on its local conditions, which differ according to connectivity and state of development. Adaptive capacity in this study is taken as the externally driven potential of different parameters leading to a decrease in outmigration and an upliftment of the human environment, which could lead to sustainable livelihood development in the rural areas of the Himalayas.

Keywords: adaptive capacity, Indian Himalayan region, participatory, sustainable livelihood development

Procedia PDF Downloads 117
354 Anti-Obesity Effects of Pteryxin in Peucedanum japonicum Thunb Leaves through Different Pathways of Adipogenesis In-Vitro

Authors: Ruwani N. Nugara, Masashi Inafuku, Kensaku Takara, Hironori Iwasaki, Hirosuke Oku

Abstract:

Pteryxin from the partially purified hexane phase (HP) of Peucedanum japonicum Thunb (PJT) was identified as the active compound responsible for its anti-obesity effects. In this study we therefore investigated the mechanisms of this anti-obesity activity in vitro. The HP was fractionated, and its effect on triglyceride (TG) content was evaluated in 3T3-L1 and HepG2 cells. Comprehensive spectroscopic analyses were used to identify the structure of the active compound. The dose-dependent effect of the active constituent on TG content, and on gene expression related to adipogenesis, fatty acid catabolism, energy expenditure, lipolysis and lipogenesis (20 μg/mL), were examined in vitro. Furthermore, a higher dose of pteryxin (50 μg/mL) was tested against 20 μg/mL in 3T3-L1 adipocytes. The mRNA was sequenced on a SOLiD next-generation sequencer and the resulting data were analyzed by Ingenuity Pathway Analysis (IPA). The active constituent was identified as pteryxin, a compound known in PJT whose biological activities against obesity had not previously been reported. Pteryxin dose-dependently suppressed TG content in both 3T3-L1 adipocytes and HepG2 hepatocytes (P < 0.05). Sterol regulatory element-binding protein-1 (SREBP1c), fatty acid synthase (FASN), and acetyl-CoA carboxylase-1 (ACC1) were downregulated in pteryxin-treated adipocytes (by 18.0, 36.1 and 38.2%, respectively; P < 0.05) and hepatocytes (by 72.3, 62.9 and 38.8%, respectively; P < 0.05), indicating suppression of fatty acid synthesis. Hormone-sensitive lipase (HSL), a lipid-catabolising gene, was upregulated (by 15.1%; P < 0.05) in pteryxin-treated adipocytes, suggesting improved lipolysis. Concordantly, the adipocyte size marker gene, paternally expressed gene 1/mesoderm-specific transcript (MEST), was downregulated (by 42.8%; P < 0.05), further accelerating lipolytic activity.
The upregulation of uncoupling protein 2 (UCP2; by 77.5%; P < 0.05) reflects the improved energy expenditure due to pteryxin. At 50 μg/mL, pteryxin completely suppressed PPARγ, MEST, SREBP1c, HSL, adiponectin, fatty acid binding protein (FABP) 4, and UCPs in 3T3-L1 adipocytes. The IPA suggested that pteryxin at 20 μg/mL and 50 μg/mL suppresses obesity through two different pathways, with the WNT signaling pathway playing a key role at the higher dose at the preadipocyte stage. Pteryxin in PJT thus plays a key role in regulating the lipid metabolism gene network and improving energy production in vitro. The results suggest pteryxin as a new natural compound for use as an anti-obesity drug in the pharmaceutical industry.

Keywords: obesity, peucedanum japonicum thunb, pteryxin, food science

Procedia PDF Downloads 453
353 Evaluating the Teaching and Learning Value of Tablets

Authors: Willem J. A. Louw

Abstract:

The wave of advanced computing technology developed in the recent past has significantly changed the way we communicate, collaborate and collect information. It has created a new technology environment and paradigm in which our children and students grow up, and this impacts their learning. Research has confirmed that Generation Y students prefer learning in this new technology environment. The challenge, or question, is: how do we adjust our teaching and learning to make the most of these changes? The complexity of effective and efficient teaching and learning must not be underestimated, and changes must be preceded by proper objective research to prevent haphazard developments that could do more harm than good. A blended learning approach has been used in the Forestry department for a number of years, including electronic peer-assisted learning (e-PAL) in a fixed-computer set-up within a learning management system environment. It was decided to extend the investigation with exploratory research using a range of different tablet devices. For this purpose, learning activities and assignments were designed to cover aspects of communication, collaboration and information collection. The Moodle learning management system was used to present module information, to communicate with students, and for feedback and data collection. Student feedback was collected via an online questionnaire and informal discussions. The research project was implemented in 2013, 2014 and 2015 amongst first- and third-year students in a three-year technical tertiary forestry qualification in commercial plantation management. In general, more than 80% of the students indicated that the device was very useful in their learning environment, while the rest indicated that it was not.
More than ninety percent of the students said they would like to continue using the devices for all of their modules, whilst the rest reported functioning efficiently without them. Results indicated that information collection (access to resources) was rated the most advantageous factor, followed by communication and collaboration. The main general advantages of tablets listed by the students were mobility (portability); 24/7 access to learning material and information of any kind on a user-friendly device in a Wi-Fi environment; fast processing speeds; saving time, effort and airtime through Skype and e-mail; and the use of various applications. Ownership of the device is a critical factor, while risk was identified as a major potential constraint. Significant differences were reported between the different types and qualities of tablets: the preferred types were those with a bigger screen and overall better functionality and quality. Tablets significantly support the collaboration, communication and information-collection needs of students. They do not, however, replace the need for a computer/laptop, because of limited storage and computation capacity, small screen size and inefficient typing.

Keywords: tablets, teaching, blended learning, tablet quality

Procedia PDF Downloads 247
352 Innovation Culture TV “Stars of Science”: 15 Seasons Case Study

Authors: Fouad Mrad, Viviane Zaccour

Abstract:

The accelerated developments in the political, economic, environmental, security, health, and social spheres are exhausting planners across the world, especially in Arab countries. The impact of these tensions is multifaceted and has resulted in conflicts, wars, migration, and human insecurity. The potential cross-cutting role that science, innovation and technology can play in supporting Arab societies to address these pressing challenges is a serious, unique chance for the people of the region. This opportunity is based on the existing capacity of educated youth and untapped talent in local universities and research centers. It is widely accepted that Arab countries have achieved major advances in the economy, education and social wellbeing since the 1970s, mainly as a direct outcome of oil and other natural resources. The UN Secretary-General, during the Education Summit in September 2022, stressed that “Learning continues to underplay skills, including problem-solving, critical thinking and empathy.” Stars of Science, by Qatar Foundation, was launched in 2009 and has been sustained through 2023, with a consistent mission from the start: to mobilize a new generation of pan-Arab innovators and problem solvers by encouraging youth participation and interest in science, technology and entrepreneurship throughout the Arab world via the program and its social media activities, and to make science accessible and attractive to mass audiences by de-mystifying the process of innovation.
The program harnesses best practices from reality TV to show that science, engineering, and innovation are important in everyday life and can be fun. Thousands of participants learned unforgettable lessons; winners changed their lives forever as they learned, earned seed capital, and became drivers of change in their countries and families; millions of viewers were exposed to an innovative experimental process; and, culturally, several relevant national institutions adopted the Stars of Science track in their national initiatives. The program demonstrated experientially youth self-efficacy, the most distinct core property of human agency: an individual's belief in his or her capacity to execute the behaviors necessary to produce specific performance attainments. In addition, the program showed that innovation is performed by networks of people with different sets of technological knowledge, skills and competencies, with socially shared technological knowledge a main determinant of economic activity in any economy.

Keywords: science, invention, innovation, Qatar foundation, QSTP, prototyping

Procedia PDF Downloads 77
351 Immobilization of Horseradish Peroxidase onto Bio-Linked Magnetic Particles with Allium Cepa Peel Water Extracts

Authors: Mirjana Petronijević, Sanja Panić, Aleksandra Cvetanović, Branko Kordić, Nenad Grba

Abstract:

Peroxidases are biological catalysts that play a major role in phenolic wastewater treatment and other environmental applications. The most studied member of the peroxidase family is horseradish peroxidase (HRP). In environmental processes, HRP can be used in its free or immobilized form. Enzyme immobilization onto a solid support is performed to improve the enzyme's properties, prolong its lifespan and operational stability, and allow its reuse in industrial applications. One newer generation of enzyme supports is magnetic particles (MPs). Fe₃O₄ MPs are among the most widely used supports for enzyme immobilization owing to their biocompatibility and non-toxicity, and they can be easily separated and recovered from water by applying an external magnetic field. On the other hand, metals and metal oxides are not suitable for the covalent binding of enzymes, so their surfaces must be modified. Fe₃O₄ MP functionalization can be performed during synthesis if it takes place in the presence of plant extracts. Extracts of plant material, such as wild plants, herbs, and even waste materials of the food and agricultural industries (bark, shells, leaves, peel), are rich in bioactive components such as polyphenols, flavonoids, and sugars. When magnetite is synthesised in the presence of plant extracts, these bioactive components are incorporated into its surface, thereby functionalizing it. In this paper, the suitability of bio-magnetite as a solid support for covalent immobilization of HRP via glutaraldehyde was examined. The activity of immobilized HRP at different pH values (4-9) and temperatures (20-80°C), and its reusability, were examined. The bio-MPs were synthesized by the co-precipitation method from Fe(II) and Fe(III) sulfate salts in the presence of a water extract of Allium cepa peel.
The water extract showed 81% antiradical potential (DPPH assay), consistent with its high polyphenol content. According to FTIR analysis, the bio-magnetite contains oxygen functional groups (-OH, -COOH, C=O) suitable for binding glutaraldehyde, after which the enzyme is covalently immobilized. The immobilized enzyme showed high activity at ambient temperature and pH 7 (30 U/g) and retained ≥ 80% of its activity across a wide range of pH (5-8) and temperature (20-50°C). HRP immobilized onto bio-MPs showed remarkable stability towards temperature and pH variations compared with the free enzyme. On the other hand, immobilized HRP showed low reusability: after the first washing cycle the enzyme retains 50% of its activity, and after the third washing cycle only 22%.

Keywords: bio-magnetite, enzyme immobilization, water extracts, environmental protection

Procedia PDF Downloads 222
350 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others properly. We perceive an action by observing the kinematics of the motions involved in its performance, and we use our experience and concepts to recognize actions correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient at applying our learned concepts when analyzing motions and recognizing actions: experiments on subjects observing actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture using growing grid layers is proposed. The first-layer growing grid receives pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by concatenating the elicited activations of the learned map. An ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. These time-invariant vectors are sent to a second-layer growing grid for categorization, which creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance was evaluated in an experiment on the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, and Pick Up and Throw.
The growing grid architecture was trained on several random splits of the data into training and generalization test sets, taking on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid is superior for the action recognition task: the SOM architecture requires around 150 epochs for each training of the first-layer SOM and 1200 epochs for each training of the second-layer SOM on the same dataset, and achieves an average recognition accuracy of 90% on generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the ability to represent a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, giving higher flexibility and robustness. Moreover, by utilizing growing grids the system automatically acquires prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever representational demand is high.
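The growth mechanism described above, inserting new neurons where representational demand is high, can be illustrated with a minimal growing-grid sketch. This is an illustration of the general idea only, not the paper's architecture: the training schedule, the single insertion heuristic (largest accumulated quantization error), and the random stand-in data are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_grid(weights, data, epochs=20, lr=0.2, sigma=1.0):
    # One SOM-style pass: move the best-matching unit (BMU) and its
    # grid neighbours towards each input vector.
    rows, cols, _ = weights.shape
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in data:
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def grow(weights, data):
    # Insert a new row of neurons next to the row whose neuron accumulated
    # the largest quantization error (highest representational demand).
    rows, cols, _ = weights.shape
    err = np.zeros((rows, cols))
    for x in data:
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        err[bmu] += d[bmu]
    r = int(np.unravel_index(np.argmax(err), err.shape)[0])
    r2 = min(r + 1, rows - 1)
    new_row = (weights[r] + weights[r2]) / 2  # interpolate neighbouring rows
    return np.insert(weights, r + 1, new_row, axis=0)

data = rng.normal(size=(200, 3))  # stand-in for 3D joint-posture vectors
w = rng.normal(size=(2, 2, 3))    # start from a minimal 2x2 map
w = train_grid(w, data)
w = grow(w, data)                 # the map expands where demand is highest
print(w.shape)                    # (3, 2, 3)
```

Alternating training and growth passes in this way is what gives the network its automatic size-setting: the map only acquires neurons where the input distribution demands them.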

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 157
349 Social Value of Travel Time Savings in Sub-Saharan Africa

Authors: Richard Sogah

Abstract:

The significance of transport infrastructure investment for economic growth and development has been central to the World Bank's strategy for poverty reduction. Among conventional surface transport infrastructures, roads are significant in facilitating the movement of human capital, goods and services. When transport projects (e.g., roads and super-highways) are implemented, they bring some negative social values (costs), such as increased noise and air pollution for residents living near these facilities and displaced individuals. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified: the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, increases accessibility, expands trade, and improves safety. Travel time savings (TTS), the major economic benefit of urban and inter-urban transport projects and therefore integral to their economic assessment, are nevertheless often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, and the inability of models replicated from the developed world to capture the actual value of travel time savings given large-scale unemployment, underemployment, and other labor-market distortions, has contributed to this failure to value travel time savings when appraising transport schemes in developing countries.
This omission of the value of travel time savings from the benefits of transport projects in developing countries makes it difficult for investors and stakeholders to accept or dismiss projects, and biases appraisal towards schemes that reduce vehicle operating costs and other parameters rather than those that ease congestion, increase average speed, facilitate walking and handloading, and thus save travel time. Given the complexity of estimating the value of travel time savings and the prevalence of informal labour in Sub-Saharan Africa, we construct a “nationally ranked distribution of time values” and estimate the value of travel time savings as the area beneath the distribution. Compared with other approaches, our method captures both formal-sector workers and people who work outside the formal sector, for whom changes in time allocation occur in the informal economy and in household production. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, and other sources.
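As a rough illustration of the approach, one can sort individual hourly time values (formal-sector wages plus imputed values for informal and household work) from highest to lowest and take the area beneath that ranked distribution as an economy-wide average value of a saved hour. The sketch below uses entirely synthetic, illustrative numbers and is not the authors' estimation procedure or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hourly time values: formal-sector wages plus imputed values
# for informal work and household production (all figures illustrative).
formal = rng.lognormal(mean=1.5, sigma=0.5, size=600)
informal = rng.lognormal(mean=0.8, sigma=0.7, size=1400)
values = np.sort(np.concatenate([formal, informal]))[::-1]  # ranked high to low

def mean_value_of_time(ranked):
    # Trapezoidal area beneath the ranked distribution, with population
    # share on the horizontal axis, gives an economy-wide average value
    # of a saved hour.
    share = np.linspace(0.0, 1.0, len(ranked))
    mids = (ranked[:-1] + ranked[1:]) / 2
    return float(np.sum(mids * np.diff(share)))

vot = mean_value_of_time(values)
hours_saved = 1_000_000  # hypothetical aggregate annual travel time saved
print(f"average value of time: {vot:.2f} per hour")
print(f"annual TTS benefit: {vot * hours_saved:,.0f}")
```

Because the horizontal axis is population share, the area under the ranked curve reduces to an average across everyone with a time value, which is precisely how the method captures people outside the formal labour market.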

Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Sahara Africa

Procedia PDF Downloads 107
348 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the hydrogen economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence overall efficiency, in particular the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility of analysing the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser, to the best of our knowledge for the first time in the literature. Several experiments were carried out, each designed so that only specific physical processes occur and the AE related solely to one process can be measured; a range of experimental conditions was used to induce different flow regimes within the flow channels and GDL. The resulting AE data are first separated into events, defined by exceeding the noise threshold: each acoustic event consists of a number of consecutive peaks and ends when the wave falls back below the noise threshold. For each acoustic event the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average peak intensity, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values of all criteria. Principal component analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix and thereby provides an easy way of determining which criteria convey the most information about the acoustic data.
Next, the data are plotted in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to particular acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in situ. The AE data produced in this way can be used to train a self-learning algorithm and so develop an analytical tool for diagnosing different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows in-situ optimization and recognition of suboptimal states of operation.
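The event-extraction and PCA steps described above can be sketched as follows. This is a simplified stand-in: the signal is synthetic, only four of the six event attributes are computed, and the thresholds are arbitrary, but the pipeline shape (threshold-based event segmentation, per-event feature vectors, normalization, eigen-decomposition of the covariance matrix) matches the description.

```python
import numpy as np

rng = np.random.default_rng(2)

def extract_events(signal, threshold):
    # An event is a contiguous run of samples above the noise threshold;
    # it ends when the wave falls back below the threshold.
    above = np.abs(signal) > threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append(signal[start:i])
            start = None
    if start is not None:
        events.append(signal[start:])
    return events

def event_features(ev):
    # A subset of the key attributes: peak amplitude, duration,
    # average intensity, and time until the maximum is reached.
    ev = np.abs(ev)
    return [ev.max(), len(ev), ev.mean(), int(ev.argmax())]

signal = rng.normal(scale=0.1, size=5000)
signal[1000:1050] += np.hanning(50) * 3.0    # synthetic acoustic burst
signal[3000:3100] += np.hanning(100) * 1.5   # a second, longer burst

X = np.array([event_features(e) for e in extract_events(signal, 0.5)])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # normalise each criterion

# PCA via eigen-decomposition of the covariance matrix: the eigenvalues
# order the criteria axes by how much variance (information) they carry.
cov = np.cov(X.T)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
scores = X @ eigvecs[:, order]
print(scores.shape)
```

Plotting the first two or three columns of `scores` corresponds to the two- or three-dimensional criteria space in which events from different experiments can be visually separated.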

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 155
347 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock

Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer

Abstract:

Especially in the alpine region, the areas available for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as the transfer of research results to industry. One development objective is a system of wood panel construction with a high degree of prefabrication, to optimize construction quality, construction time and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is an open system in timber frame and solid wood construction, suitable for adding a maximum of two storeys to residential buildings. The applicability of the system is mainly determined by the existing building stock; therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and lie outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification was created. Component structure, load-bearing structure and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications.
In order to verify the practical suitability of the developed system, stakeholder workshops are being held, and the system is being applied in the planning of a two-storey extension. A company-independent construction standard offers the possibility of cooperation and the bundling of capacities, so that larger construction volumes can be handled in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is published under an open license; in this context, open means publicly published and freely usable and modifiable for one's own use, as long as authorship and deviations are stated. The construction system will support planners and contractors from design to execution. The companies are provided with a system manual containing the system description and an application manual; it will facilitate the selection of the correct component cross-sections for specific construction projects by means of complete component and detail specifications. This presentation highlights the initial situation, the motivation and the approach, but especially the technical solution and the possibilities for its application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.

Keywords: redensification, SME, urban development, wood building system

Procedia PDF Downloads 109
346 An Alternative to Problem-Based Learning in a Post-Graduate Healthcare Professional Programme

Authors: Brogan Guest, Amy Donaldson-Perrott

Abstract:

The Master of Physician Associate Studies (MPAS) programme at St George’s, University of London (SGUL), is an intensive two-year course that trains students to become physician associates (PAs). PAs are generalist healthcare providers who work in primary and secondary care across the UK. PA programmes face the difficult task of preparing students to become safe medical providers in two short years. Our goal is to teach students to develop clinical reasoning early in their studies; historically, this has been done predominantly through problem-based learning (PBL). We have had increasing concern about student engagement in PBL and difficulty recruiting facilitators to maintain the low student-to-facilitator ratio it requires. To address this, we created ‘Clinical Application of Anatomy and Physiology (CAAP)’: peer-led, interactive, problem-based, small-group sessions designed to develop students’ clinical reasoning skills. The sessions were designed using the concept of team-based learning (TBL). Students were divided into small groups, and each group completed a pre-session quiz consisting of difficult questions devised to assess the application of medical knowledge; no external resources were permitted during the quiz. After the quiz, students worked through a series of open-ended clinical tasks using all available resources, at their own pace, with the session peer-led rather than facilitator-driven. For a group of 35 students, two facilitators observed the sessions, which used infinite-space whiteboard software. Each group member was encouraged to participate actively and work together to complete the 15-20 tasks. Each session ran for two hours and concluded with a post-session quiz identical to the pre-session quiz.
We obtained subjective feedback from students on their experience with CAAP and evaluated the objective benefit of the sessions through the quiz results. Qualitative feedback was generally positive, with students reporting that the sessions increased engagement, clinical understanding, and confidence. They found the small-group aspect beneficial and the technology easy to use and intuitive. They also liked building a resource for their future revision, something unique to CAAP compared with PBL, which our students participate in weekly. Preliminary quiz results showed improvement from pre- to post-session; further statistical analysis will be performed once all sessions are complete (the final session will run in December 2022) to determine significance. As a post-graduate healthcare professional programme, we have a strong focus on self-directed learning. Whilst PBL has been a mainstay of our curriculum since its inception, there are limitations and concerns about its future in view of student engagement and facilitator availability. Whilst CAAP is not TBL, it draws on the benefits of peer-led, small-group work with pre- and post-session team-based quizzes. The pilot has shown that students are engaged by CAAP and can make significant progress in clinical reasoning in a short amount of time, and that this can be achieved with a high student-to-facilitator ratio.

Keywords: problem based learning, team based learning, active learning, peer-to-peer teaching, engagement

Procedia PDF Downloads 80