Search results for: real time control
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27797

257 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals

Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova

Abstract:

Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva River, and groundwater pollution. The municipal intakes involve 34 wells arranged in a north-south sequence over 15 km along the foot of the left slope of the Protva river valley. The northern and southern water intakes lie upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e., precipitation infiltration accounts for the smaller part of the groundwater, while the greater part is formed by inflow from the Protva. The water intakes are maintained by the Protva river runoff, the volume of which depends on precipitation and watershed area. Groundwater contamination with tritium was first detected in the sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers while implementing the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories revealed that research nuclear reactors and accelerators where tritium targets are applied, as well as radioactive waste storages, could be considered potential sources of technogenic tritium. All these sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE ranges from about 17.4 to 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). A risk assessment was carried out to estimate the possible effect of the observed tritium concentrations on human health, using data on tritium concentrations in piped drinking water. The ³H activity amounted to 10.6 Bq/l, corresponding to a risk from consumption of such water of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for the population living near an NPP (1.6·10⁻⁸ year⁻¹) and at the same time corresponds to the level of tolerable risk (10⁻⁶), falling within the “risk optimization” region, i.e., the sphere in which economically sound measures for exposure risk reduction are planned. To estimate the chemical risk, physical and chemical analyses were made of the waters from all springs and wells near the SRC-IPPE. The chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of the risk associated with tritium exposure, on the one hand, and with dangerous chemical (e.g., heavy metal) contamination of Obninsk drinking water, on the other, confirm that it is the chemical pollutants that are responsible for the health risk.
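
For orientation, the form of such a radiological risk estimate can be sketched with a linear no-threshold calculation. The coefficients and the 2 l/day intake below are generic ICRP-style assumptions, not the study's parameters (which the abstract does not give), so the printed figure illustrates the calculation, not the reported ~3·10⁻⁷ year⁻¹ value:

```python
# Hedged illustration: annual risk from tritium in drinking water via a
# linear no-threshold model. All coefficients are generic ICRP-style values
# chosen for illustration; the study's actual parameters may differ.

TRITIUM_BQ_PER_L = 10.6          # measured 3H activity in tap water (abstract)
WATER_L_PER_DAY = 2.0            # assumed daily drinking-water intake
DOSE_COEFF_SV_PER_BQ = 1.8e-11   # assumed ingestion dose coefficient for HTO
RISK_COEFF_PER_SV = 5.5e-2       # assumed nominal fatal-cancer risk per sievert

annual_intake_bq = TRITIUM_BQ_PER_L * WATER_L_PER_DAY * 365
annual_dose_sv = annual_intake_bq * DOSE_COEFF_SV_PER_BQ
annual_risk = annual_dose_sv * RISK_COEFF_PER_SV

print(f"annual intake: {annual_intake_bq:.0f} Bq")
print(f"annual dose:   {annual_dose_sv:.2e} Sv")
print(f"annual risk:   {annual_risk:.1e} per year")
```

With these generic coefficients the result comes out around 10⁻⁸ per year, lower than the study's ~3·10⁻⁷ year⁻¹, underscoring that the authors' own methodology and coefficients differ from this sketch.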

Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk

Procedia PDF Downloads 218
256 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Contained in Cooking Fumes

Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai

Abstract:

Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, some of which have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers’ lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Establishing long-term exposure data has therefore become an important issue for conducting health risk assessments for cooking workers, and an approach for doing so is proposed in this study. The generation rates of PAHs and aldehydes from a cooking process were determined by placing a sampling train directly under the exhaust fan, under the total enclosure condition and the normal operating condition, respectively. Subtracting the concentration collected under the latter (representing the hood-collected concentration) from that collected under the former (representing the total emitted concentration) yields the fugitive emitted concentration. These data were further converted to generation rates based on the flow rates specified for the exhaust fan. The generation rates were determined in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same train was also used to sample aldehydes, but installed with a filter pre-coated with DNPH and followed by a 2,4-DNPH cartridge, for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively. The obtained generation rates of PAHs and aldehydes were applied to the near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation purposes, PAH and aldehyde samples were collected simultaneously using the same sampling train at both the near-field and far-field sites of the testing chamber. The sampling results, together with a mixed-effects model, were used to calibrate the estimated near-field/far-field exposures. The obtained emission rates were further converted to emission factors for PAHs and aldehydes according to the amount of cooking oil consumed. Applying long-term cooking oil consumption records, the emission rates for PAHs and aldehydes were reconstructed, and long-term exposure databanks for cooks (the estimated near-field concentrations) and helpers (the estimated far-field concentrations) were established. Results show that the proposed approach is adequate for determining the generation rates of PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/far-field exposures, though significantly different from those obtained in the field, can be calibrated using the mixed-effects model. Finally, the established long-term databank provides a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
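
The near-field/far-field construct referred to above is the standard two-zone exposure model. A minimal steady-state sketch follows; the function and parameter values are illustrative assumptions, not the study's implementation:

```python
# Steady-state two-zone (near-field/far-field) exposure model.
# G: contaminant generation rate (mg/min), Q: room ventilation rate (m^3/min),
# beta: inter-zone airflow between near field and far field (m^3/min).
# Values are illustrative, not the study's measured parameters.

def two_zone_steady_state(G, Q, beta):
    c_ff = G / Q            # far-field concentration (helpers)
    c_nf = c_ff + G / beta  # near-field concentration (cooks)
    return c_nf, c_ff

G = 5.0      # mg/min, fugitive emission rate from the cooking process
Q = 30.0     # m^3/min, general ventilation
beta = 5.0   # m^3/min, near-field/far-field air exchange

c_nf, c_ff = two_zone_steady_state(G, Q, beta)
print(f"near-field: {c_nf:.2f} mg/m^3, far-field: {c_ff:.2f} mg/m^3")
```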

Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)

Procedia PDF Downloads 115
255 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada

Authors: Stefan W. Kienzle

Abstract:

The increasing occurrence of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain threshold temperatures (0, ±10, ±20, +25, +30°C), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the non-parametric Sen’s slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5°C in the South and 6-7°C in the North, summers show the weakest warming during the same period, ranging from about 0.5-1.5°C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below −20°C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, both the number of heat-wave days and the number of cold-spell days have increased two- to four-fold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
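
Both trend statistics named above are straightforward to compute per grid cell. The sketch below implements the Mann-Kendall S statistic (without tie correction) and Sen's slope on a synthetic frost-day series, since the Alberta grid data themselves are not reproduced here:

```python
# Minimal sketch of the Mann-Kendall trend test and Sen's slope estimator
# (no tie correction), as applied per grid cell to an annual index series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all pairs i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance without tie correction
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))            # two-sided p-value
    return s, z, p

def sens_slope(x):
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)

# Example: annual frost days for one grid cell, 1951-2017 (synthetic data)
rng = np.random.default_rng(0)
years = np.arange(1951, 2018)
frost_days = 120 - 0.3 * (years - 1951) + rng.normal(0, 6, len(years))
s, z, p = mann_kendall(frost_days)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.4f}, "
      f"Sen slope={sens_slope(frost_days):.3f} days/yr")
```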

Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes

Procedia PDF Downloads 71
254 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also spurred world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely the roof covering, the roof structure, the envelope walls and the envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach lies in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models covering building typologies not yet adequately addressed by existing works, whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
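
To make the component-based loss logic concrete, here is a minimal sketch under deliberately strong simplifications: lognormal component fragilities, independent damage states, and additive loss contributions. The four components mirror the proof-of-concept example, but all parameter values are invented for illustration and are not ERMESS outputs:

```python
# Hedged sketch: building-level expected loss ratio from component-level
# lognormal fragility functions, assuming independent component damage and
# additive consequence contributions. Parameter values are illustrative.
from scipy.stats import lognorm

# name: (median gust speed m/s, lognormal dispersion beta, loss ratio if damaged)
components = {
    "roof covering":     (35.0, 0.40, 0.15),
    "roof structure":    (55.0, 0.35, 0.40),
    "envelope walls":    (60.0, 0.30, 0.30),
    "envelope openings": (40.0, 0.45, 0.15),
}

def expected_loss_ratio(v):
    """Expected building loss ratio at gust speed v (m/s)."""
    total = 0.0
    for median, beta, loss in components.values():
        p_damage = lognorm.cdf(v, s=beta, scale=median)  # fragility at v
        total += p_damage * loss
    return total

for v in (30, 45, 60):
    print(f"v = {v} m/s -> expected loss ratio {expected_loss_ratio(v):.2f}")
```

Pairing this curve with a wind hazard (probabilities of each intensity level per year) would then give the annualized loss, which is the integration step the abstract describes.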

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 165
253 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and the thermodynamic variations (Gibbs free energies) of the adsorption sites.
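
For reference, the constant-parameter Langmuir form that the pressure-varying treatment generalizes is q(P) = q_m·K·P/(1 + K·P). The sketch below fits this classical fixed-parameter baseline to synthetic uptake data; it does not reproduce the FLS-PVLR machinery, and the units and values are invented:

```python
# Sketch: fitting the constant-parameter Langmuir isotherm
#   q(P) = q_m * K * P / (1 + K * P)
# to synthetic uptake data. The FLS-PVLR extension described in the abstract
# replaces the constants q_m and K with smoothly pressure-varying parameter
# vectors; only the classical fixed-parameter fit is shown here.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_m, K):
    return q_m * K * P / (1.0 + K * P)

# Synthetic isotherm (illustrative units: P in kPa, q in mmol/g)
P = np.linspace(1, 100, 25)
rng = np.random.default_rng(1)
q_obs = langmuir(P, q_m=3.0, K=0.05) + rng.normal(0, 0.03, P.size)

(q_m_fit, K_fit), _ = curve_fit(langmuir, P, q_obs, p0=(1.0, 0.01))
print(f"q_m = {q_m_fit:.2f} mmol/g, K = {K_fit:.4f} 1/kPa")
```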

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 197
252 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students

Authors: Nancy A. Delich, Stephen D. Roberts

Abstract:

This case study explores using self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has in performing a set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain, while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their beliefs about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety, among other mental health concerns. Self-efficacy training can be used to address these socio-emotional developmental issues with this population. Four sources of experience are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who had experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group and educated the students about the different types of bullying while teaching them the terminology, the signs, and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. Vicarious experience shapes the perception that students have of their capabilities: viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion, which occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisals is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions on bullying. There was an increased understanding of the issues of bullying, resulting in the students’ belief in their ability to perform protective behaviors and deter future occurrences. The outcome of the intervention plan was a reduction in reported bullying incidents. In conclusion, self-efficacy training can be an effective counseling and teaching strategy for addressing and enhancing social-emotional functioning in deaf and hard of hearing adolescents.

Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students

Procedia PDF Downloads 327
251 Development of PCL/Chitosan Core-Shell Electrospun Structures

Authors: Hilal T. Sasmazel, Seda Surucu

Abstract:

Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) 3-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that can closely mimic the natural extracellular matrix (ECM) to create a tissue-specific niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study combines the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Among the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability make it necessary to use this polymer in a composite structure. PCL, on the other hand, is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it is attractive to use these two polymers together as a hybrid material in order to overcome the disadvantages of each and combine their advantages. The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements, and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ launcher software program (USA) on the SEM photographs, the average fiber diameters were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for the PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore sizes of the PCL and chitosan structures were 66.91% and 61.90% smaller, respectively, than those of the PCL/chitosan core-shell structures. TEM images proved that homogeneous, continuous, bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability of the produced PCL/chitosan core-shell structure was 2315±3.4 g·m⁻²·day⁻¹. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line. The standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials.

Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold

Procedia PDF Downloads 459
250 Application of Pedicled Perforator Flaps in Large Cavities of the Breast

Authors: Neerja Gupta

Abstract:

Objective: Reconstruction of large cavities of the breast without contralateral symmetrisation. Background: Reconstruction of the breast includes a wide spectrum of procedures, from displacement to regional and distant flaps. Pedicled perforator flaps cover a wide spectrum of reconstructive surgery for all quadrants of the breast, especially in patients with comorbidities. These axial flaps, used singly or as an adjunct, are based on a near-constant perforator vessel; a ratio of 2:1 at its entry into the flap is good for maintaining vascularity. The perforators of the lateral chest wall, viz. LICAP and LTAP, have overlapping perforasomes without clear demarcation. The LTAP is localized in the narrow zone between the lateral breast fold and the anterior axillary line, 2.5-3.8 cm from the fold. MICAPs are localized 1-2 cm from the sternum. Being 1-2 mm in diameter, a single perforator is sufficient to maintain the flap. LICAP has a dominant perforator in the 6th-11th spaces, while LTAP has higher-placed dominant perforators in the 4th and 5th spaces. Methodology: Six consecutive patients who underwent reconstruction of the breast with pedicled perforator flaps were retrospectively analysed. Selection of the flap was based on the size and location of the tumour, the anticipated volume loss, willingness to undergo contralateral symmetrisation, cosmetic expectations, and the finances available. Three patients underwent vertical LTAP, the distal limit of the flap being the inframammary crease; three patients underwent MICAP, oriented along the axis of the rib, the distal limit being the anterior axillary line. Preoperative identification was done using a unidirectional handheld Doppler. The flap was raised caudal to cranial, the pivot point of rotation being the vessel's entry into the skin. The donor area is determined by the skin pinch. Flap harvest time was 20-25 minutes. Intraoperative vascularity was assessed by dermal bleed. The patients' immediate pre-operative, post-operative and follow-up photographs were compared independently by two breast surgeons. Patients were given the (licensed) BREAST-Q questionnaire for scoring. Results: The median age of the six patients was 46. Each patient had a hospital stay of 24 hours. None of the patients was willing to undergo contralateral symmetrisation. The specimen dimensions ranged from 8x6.8x4 cm to 19x16x9 cm. The reconstructed breast volume ranged from 30 percent to 45 percent. All wide excisions had free margins on frozen section. The mean flap dimensions were 12x5x4.5 cm. One LTAP underwent marginal necrosis and delayed wound healing due to seroma. Three patients had phyllodes tumours, of which one was borderline and two were benign on final histopathology. The other three patients had invasive ductal cancer and have completed their radiation. At the median follow-up of 7 months, the satisfaction scores were 90 for physical well-being and 85 for surgical results. Surgeons scored the results fair to good on the Harvard scale. Conclusion: Pedicled perforator flaps are a valuable option for defects of up to 3/8 of the breast volume. LTAP is preferred for tumours in the central, upper, and outer quadrants of the breast and MICAP for the inner and lower quadrants. The vascularity of the flap depends on the angiosomal territories and on adequate venous and cavity drainage.

Keywords: breast, oncoplasty, pedicled, perforator

Procedia PDF Downloads 165
249 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on national and global scales. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” derived from other available spatial products. The model is then generalized over grids where the soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that have been delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most DSM products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty estimates. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries, but numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promising, providing tools to improve and monitor soil quality at the country, EU and global levels.
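
The calibrate-then-predict workflow described above can be condensed to a few lines. The sketch below uses a random forest, one of the ML models commonly applied in DSM, on synthetic covariates standing in for the soil-forming factors; the covariates, target property and values are all invented for illustration:

```python
# Hedged sketch of a basic DSM workflow: calibrate an ML model on point soil
# observations with environmental covariates, validate, then predict on a
# grid. All data here are synthetic placeholders, not a national dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_obs = 500

# Covariates approximating soil-forming factors (climate, relief, ...)
X = np.column_stack([
    rng.uniform(300, 1200, n_obs),  # mean annual precipitation (mm)
    rng.uniform(5, 15, n_obs),      # mean annual temperature (degC)
    rng.uniform(0, 800, n_obs),     # elevation (m)
    rng.uniform(0, 25, n_obs),      # slope (degrees)
])
# Synthetic target, e.g. topsoil organic carbon (g/kg)
y = 0.02 * X[:, 0] - 0.8 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 2, n_obs)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean().round(2))

model.fit(X, y)
# Prediction grid: the same covariates sampled at unvisited locations
grid = np.column_stack([
    rng.uniform(300, 1200, 10),
    rng.uniform(5, 15, 10),
    rng.uniform(0, 800, 10),
    rng.uniform(0, 25, 10),
])
print("predicted values on grid:", model.predict(grid).round(1))
```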

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 158
248 Improving Recovery, Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip, part of Palestine (365 km² and 1.8 million inhabitants), is considered a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, and only 5-10% of it is suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to find a non-conventional water resource in treated wastewater, to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (a pressure line and infiltration basins, IBs), phase B (a new WWTP) and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning; nearly 23 Mm³ of partially treated wastewater have been infiltrated into the aquifer. Phases B and C have witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, in order to measure the efficiency of the SAT (soil aquifer treatment) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS and the proposed locations of its 27 recovery wells. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model simulating the plume, in order to propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, their proposed locations and their operating schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW V4.2. The monitoring results were assessed according to the locations of the monitoring wells relative to the proposed recovery wells (200 m, 500 m and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS under its original design would fail to capture the plume. Many simulations were therefore conducted, leading to three main scenarios that manipulated the starting dates, the pumping rates and the locations of the recovery wells. Simulations of plume expansion and path-lines were extracted from the model to determine how to prevent expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates; this scenario proposed adding two recovery wells in a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
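
Why well placement dominates capture efficiency can be seen from the classical capture-zone geometry of a single pumping well in uniform regional flow. The sketch below evaluates the standard analytical expressions (e.g., after Javandel and Tsang); the parameter values are illustrative and are not taken from the Gaza model:

```python
# Hedged sketch: capture-zone geometry for one recovery well in uniform
# regional flow (standard analytical result). Used only to illustrate why
# well locations and spacing dominate RRS efficiency; values are invented.
import math

Q = 500.0    # well pumping rate, m^3/day
T = 1000.0   # aquifer transmissivity, m^2/day
i = 0.002    # regional hydraulic gradient

x_stag = Q / (2 * math.pi * T * i)  # stagnation point downstream of the well
w_well = Q / (2 * T * i)            # capture width at the well line
w_max = Q / (T * i)                 # asymptotic capture width far upstream

print(f"stagnation point: {x_stag:.0f} m downstream")
print(f"capture width at the well line: {w_well:.0f} m")
print(f"maximum capture width: {w_max:.0f} m")
# Recovery wells spaced farther apart than the capture width leave gaps
# through which the infiltrated plume can escape toward municipal wells.
```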

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 290
247 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms

Authors: Joy Chadwick

Abstract:

There is no doubt that classrooms of today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of the individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences connected to inclusive practices. Threshold concept theory suggests that within every disciplinary field there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding, and deep learning, can occur. The threshold concept framework provided a lens to examine teacher candidates' struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine reflective course journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software. Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidates' understanding of teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that, through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. The research findings have implications for teacher education programs and for understanding threshold concept theory as connected to practice-based learning experiences.

Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge

Procedia PDF Downloads 39
246 Improvement of Autism Diagnostic Observation Schedule Scores after Comprehensive Intensive Early Interventions in a Clinical Setting

Authors: Nils Haglund, Svenolof Dahlgren, Maria Rastam, Peik Gustafsson, Karin Kalien

Abstract:

In Sweden, as in most developed countries, there is a substantial increase in children diagnosed with autism and other conditions within the autism spectrum (ASD). The rapid increase in ASD rates stresses the importance of developing care programs to provide support and comprehensive interventions for affected families. The current observational study was conducted in order to evaluate an ongoing Comprehensive Intensive Early Intervention (CIEI) program for children with autism in southern Sweden. The change in autism symptoms among children participating in CIEI (intervention group, n=67) was compared with that among children who received traditional habilitation services only (comparison group, n=27). Children whose parents accepted the offered CIEI program constituted the intervention group, whereas children whose parents, for whatever reason, were not interested in the program constituted the comparison group. The CIEI program was individualized to each child by experienced applied behavior analysis (ABA) specialists with different backgrounds, such as psychologists, speech pathologists or special education teachers, in cooperation with parents and preschool staff. Due to this individualization, the intervention could vary in intensity and technique. The intensity was calculated at 15-25 hours each week at home and preschool altogether. Each child was assigned one 'trainer', who was often employed as a preschool teacher but could have another educational background. An agreement between supervisor, parents and preschool staff was reached to confirm the intensity and content of the CIEI program over an approximately two-year intervention period. Symptom change was measured as the evaluation ADOS-2 score (total and calibrated severity scores) minus the corresponding baseline score, divided by the time between baseline and evaluation. The difference between the study groups regarding change in ADOS-2 scores was estimated using ANCOVA. In the current study, children in the CIEI group improved their ADOS-2 total scores between baseline and evaluation (-0.8 points per year; 95% CI: -1.2 to -0.4), whereas no such improvement was detected in the comparison group (+0.1 points per year; 95% CI: -0.7 to +0.9). The change difference (change in the CIEI group vs. change in the comparison group) was statistically significant, both crude and after adjusting for possible confounders (-1.1; 95% CI: -1.9 to -0.4). Children in the CIEI group also significantly improved their ADOS calibrated severity scores, but not significantly differently from the comparison group. The results from the current study indicate that the CIEI program significantly improves social and communicative skills among children with autism and that children with developmental delay can benefit to a similar degree as other children. The results support earlier studies reporting improvement of autism symptoms after early intensive interventions. Results from observational studies are difficult to interpret, but it is nevertheless of utmost importance to evaluate costly autism intervention programs. Such results may be of immediate importance to healthcare organizations when allocating already strained resources to different patient groups. Despite the obvious limitations of the current naturalistic study, the results support previous positive studies and indicate that children with autism benefit from participating in early comprehensive, intensive programs and that investments in these programs may be highly justifiable.
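
The group comparison described above has a compact analytical core. The sketch below reproduces its form with simulated placeholder data (the group sizes match the abstract, but the scores, baseline covariate and effect sizes are invented) using an ordinary-least-squares ANCOVA:

```python
# Hedged sketch of the ANCOVA described above: ADOS-2 change per year is
# modeled as a function of group, adjusting for a baseline covariate.
# All data are simulated placeholders, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_ciei, n_comp = 67, 27
df = pd.DataFrame({
    "group": ["CIEI"] * n_ciei + ["comparison"] * n_comp,
    "baseline_total": rng.normal(14, 4, n_ciei + n_comp).round(),
})
# Simulated score change per year: improvement (negative) in the CIEI group
df["change_per_year"] = np.where(
    df["group"] == "CIEI",
    rng.normal(-0.8, 1.5, len(df)),
    rng.normal(0.1, 1.8, len(df)),
)

model = smf.ols("change_per_year ~ C(group) + baseline_total", data=df).fit()
print(model.summary().tables[1])  # the group coefficient estimates the
                                  # adjusted change difference between groups
```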

Keywords: autism symptoms, ADOS-scores, evaluation, intervention program

Procedia PDF Downloads 120
245 Asparagus racemosus Willd. for Enhanced Medicinal Properties

Authors: Ashok Kumar, Parveen Parveen

Abstract:

India is bestowed with an extremely rich flora of plant species with medicinal value and even has two biodiversity hotspots. Indian systems of medicine, including Ayurveda, Siddha and Unani, have historically served humankind across the world since time immemorial. About 1500 plant species are well documented in the Ayurvedic Nighantus as official medicinal plants. Additionally, several hundred plant species are routinely used as medicines by local people, especially tribes living in and around forests. The natural resources of medicinal plants have been unscientifically over-exploited, forcing rapid depletion of their genetic diversity. Moreover, renewed global interest in herbal medicines may lead to additional depletion of the medicinal plant wealth of the country, as about 95% of the collection of medicinal plants for pharmaceutical preparations is carried out from natural forests. On the other hand, the huge export market for medicinal and aromatic plants needs to be seriously tapped to enhance the inflow of foreign currency. Asparagus racemosus Willd., a member of the family Liliaceae, is one of thirty-two plant species identified as priority species for cultivation and conservation by the National Medicinal Plant Board (NMPB), Government of India. Though attention is being focused on the standardization of agro-techniques and extraction methods, little has been done on genetic improvement and the selection of desired types with higher root production and saponin content, the basic ingredient of medicinal value. Saponin not only improves defense mechanisms and helps control diabetes; the roots of this species also promote the secretion of breast milk, restore lost body weight, and are considered an aphrodisiac. There is ample scope for genetic improvement of this species to enhance productivity substantially, both qualitatively and quantitatively. Emphasis is placed on selecting desired genotypes with sufficient genetic diversity for important economic traits. Hybridization between two genetically divergent genotypes could result in the synthesis of new F1 hybrids combining useful traits of both parents. The evaluation of twenty seed sources of Asparagus racemosus assembled from different geographical locations of India revealed a high degree of variability in traits of economic importance. The maximum genotypic and phenotypic variance was observed for shoot height among shoot-related traits and for root length among root-related traits. For shoot height, the genotypic variance, phenotypic variance, genotypic coefficient of variation and phenotypic coefficient of variation were recorded as 231.80, 3924.80, 61.26 and 1037.32, respectively, while those for root length were 9.55, 16.80, 23.46 and 41.27, respectively. The maximum genetic advance and genetic gain were obtained for shoot height among shoot-related traits and root length among root-related traits. Index values were developed for all seed sources based on the four most important traits, and Pantnagar (Uttarakhand), Jodhpur (Rajasthan), Dehradun (Uttarakhand), Chandigarh (Punjab), Jammu (Jammu & Kashmir) and Solan (Himachal Pradesh) were found to be promising seed sources.
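
For readers unfamiliar with these statistics, the sketch below shows the textbook relations among the variance components and the derived coefficients. The trait mean and selection intensity are illustrative assumptions (the abstract does not report them), so the printed values are not the study's:

```python
# Hedged sketch of standard genetic-parameter formulas used in seed-source
# trials. Vg and Vp are the shoot-height variances quoted in the abstract;
# the trait mean and selection intensity k are illustrative assumptions.
import math

Vg, Vp = 231.80, 3924.80  # genotypic and phenotypic variance (shoot height)
mean = 100.0              # assumed trait mean (not given in the abstract)
k = 2.06                  # selection differential at 5% selection intensity

gcv = math.sqrt(Vg) / mean * 100  # genotypic coefficient of variation (%)
pcv = math.sqrt(Vp) / mean * 100  # phenotypic coefficient of variation (%)
h2 = Vg / Vp                      # broad-sense heritability
ga = k * h2 * math.sqrt(Vp)       # genetic advance
gg = ga / mean * 100              # genetic gain (%)

print(f"GCV={gcv:.2f}%  PCV={pcv:.2f}%  h2={h2:.2f}  GA={ga:.2f}  GG={gg:.2f}%")
```

Note that the GCV/PCV figures quoted in the abstract cannot be reconciled with these formulas without the unreported trait mean, so the sketch is offered only to show how the quantities relate.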

Keywords: asparagus, genetic, genotypes, variance

Procedia PDF Downloads 108
244 CO2e Sequestration via High-Yield Crops and Methane Capture for ZEV Sustainable Aviation Fuel

Authors: Bill Wason

Abstract:

143 crude palm oil coop mills on Sumatra Island are participating in a program to transfer land from defaulted estates to small farmers while improving the sustainability of palm production to allow for biofuel and food production. GCarbon will be working with farmers to transfer technology, fertilizer, and trees to double the yield from the current baseline of 3.5 tons to at least 7 tons of oil per ha (25 tons of fruit bunches). This will be measured via yield comparisons between participant and non-participant farms. Methane will also be captured from palm oil mill effluent (POME) through belt-press filtering. Residues will be weighed, and a formula will be used to estimate methane emission reductions based on methodologies developed by other researchers. GCarbon will also cover mill ponds with a non-permeable membrane and collect methane for energy or steam production. A system for accelerating methane production involving ozone and electro-flocculation will be tested to intensify methane generation and reduce the time for wastewater treatment. A meta-analysis of research on sweet potatoes and sorghum as rotation crops will look at work in Rio Grande do Sul, Brazil, where 5 ha of test plots of industrial sweet potato have achieved yields of 60 tons and 40 tons per ha from 2 harvests in one year (100 MT/ha/year). Field trials will be duplicated in Bom Jesus das Selvas, Maranhão, testing varieties of sweet potato to measure yields and evaluate disease risks in the very different soil and climate of NE Brazil. Hog methane will also be captured. GCarbon Brazil, Coop Sisal, and an Australian research partner will plant several varieties of agave and use agronomic procedures to obtain yields of 880 MT per ha over 5 years. They will also plant new varieties expected to yield 3500 MT of biomass after 5 years (176-700 MT per ha per year). The goal is to show that the agave can adapt to Brazil's climate without disease problems. The study will include a field visit to growing sites in Australia, where agave is being grown commercially for biofuel production. Researchers will measure the biomass per hectare at various stages of the growing cycle, the sugar content at harvest, and other metrics to confirm that the yield of sugar per ha is up to 10 times greater than that of sugar cane. The study will look at sequestration rates by measuring soil carbon and root accumulation in various plots in Australia to confirm the carbon sequestered over 5 years of production. The agave developer estimates that 60-80 MT of sequestration per ha per year occurs from agave. The three study efforts in 3 different countries will define a feedstock pathway for jet fuel that involves very high-yield crops able to produce 2 to 10 times more biomass than current assumptions. This cost-effective and less land-intensive strategy can meet global jet fuel demand and produce huge quantities of food, supporting net-zero aviation and feeding 9-10 billion people by 2050.

Keywords: zero emission SAF, methane capture, food-fuel integrated refining, new crops for SAF

Procedia PDF Downloads 75
243 Effects of the In-Situ Upgrading Project in Afghanistan: A Case Study on the Formally and Informally Developed Areas in Kabul

Authors: Maisam Rafiee, Chikashi Deguchi, Akio Odake, Minoru Matsui, Takanori Sata

Abstract:

Cities in Afghanistan have been rapidly urbanized; however, many parts of these cities have been developed with no detailed land use plan or infrastructure. In other words, they have been developed informally, without any government leadership. The new government started the In-situ Upgrading Project in Kabul in 2002 to upgrade roads, the water supply network, and the surface water drainage system on the existing street layout, with the financial support of international agencies. This project is an appropriate emergency improvement for daily life, but not a fundamental improvement of living conditions and infrastructure, because the life expectancies of the improved facilities are as short as 10–15 years and residents cannot obtain land tenure in the unplanned areas. The Land Readjustment System (LRS) conducted in Japan has the advantages of rearranging irregularly shaped land lots and developing infrastructure effectively. This study investigates the effects of the In-situ Upgrading Project on private investment, land prices, and residents' satisfaction with the projects in Kart-e-Char, where properties are registered, and in Afshar-e-Silo Lot 1, where properties are unregistered. These project areas are located 5 km and 7 km, respectively, from the CBD of Kabul. This study discusses whether LRS should be applied to the unplanned area, based on the questionnaire and interview responses of experts experienced in the In-situ Upgrading Project who have knowledge of LRS. The analysis results reveal that, in Kart-e-Char, considerable private investment has been made in the construction of medium-rise (five- to nine-story) buildings for commercial and residential purposes. Land values have also increased incrementally since the project, and residents are generally satisfied with the road pavement, drainage systems, and water supply, but dissatisfied with the poor delivery of electricity as well as the lack of public facilities (e.g., parks and sports facilities). In Afshar-e-Silo Lot 1, basic infrastructure such as paved roads and the surface water drainage system has improved through the project. After the project, a few four- and five-story residential buildings were built with very low levels of private investment, but significant increases in land prices were not evident. The residents are satisfied with the contribution ratio, the drainage system, and the small increase in land prices, but there is still no drinking water supply system or tenure security; moreover, the paved roads are substandard and public facilities, such as parks, sports facilities, mosques, and schools, are lacking. The results of the questionnaire and the interviews with the four engineers highlight the problems that remain to be solved in the unplanned areas if LRS is applied, namely: land use differences, the types and conditions of the infrastructure still to be installed by the project, and the time required for positive consensus building among the residents, given the project's budget limitations.

Keywords: in-situ upgrading, Kabul city, land readjustment, land value, planned area, private investment, residents' satisfaction, unplanned area

Procedia PDF Downloads 164
242 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose of this study is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements, in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, which increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders the long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors, so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices in the manufacturing sector have been extensively investigated to address similar challenges, investigations in the construction sector are far less evident. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in their development of a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge within the literature for conceptual model development. A self-reporting survey with five-point Likert-scale items and open-ended questions was administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phase LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective when compared to practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other complex non-LEED projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from this research strengthen the theoretical foundations supporting future research toward more integrated construction supply chains. In the long term, this would lead to increased performance, profits, and client satisfaction.

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 91
241 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features

Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski

Abstract:

A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed. Metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between the aerobic and anaerobic indices and classical parameters of a person's physiological state, including blood biochemistry, glycogen concentration, and VO2max changes. Nine healthy, physically active, medium-trained men and women, who trained 2-4 times per week for at least 9 weeks, underwent (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from the single-lead wrist-wearable device were processed with detection of the QRS complex. The aerobic index (AI) was derived as the normalized slope of the QR segment; the anaerobic index (ANI) was derived as the normalized slope of the SJ segment. Biochemical parameters, glycogen content, and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycloergometry, and BIA. Negative correlations between AI and blood markers of muscle functional status were observed, including creatine phosphokinase (r = -0.238, p < 0.008), aspartate aminotransferase (r = -0.249, p < 0.004), and uric acid (r = -0.293, p < 0.004). ANI was also correlated with creatine phosphokinase (r = -0.265, p < 0.003), aspartate aminotransferase (r = -0.292, p < 0.001), and lactate dehydrogenase (LDH) (r = -0.190, p < 0.050). Thus, when the level of muscle enzymes increases during post-exercise fatigue, AI and ANI decrease. During recovery, the level of metabolites is restored, and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect the physiology of the muscles during recovery. One marker of an athlete's physiological state is the ratio between testosterone and cortisol (TCR). TCR provides a relative indication of anabolic-catabolic balance and is considered more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r = -0.437, p < 0.001) and correctly represents post-exercise physiology. In order to reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions. A negative correlation between AI and VO2max (r = -0.342, p < 0.001) was obtained. These data, indicating a rise in VO2max during fatigue, are controversial; however, some studies have revealed an increased stroke volume after training, which agrees with these findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete is ready for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. Negative correlations registered for ANI with glycogen (r = -0.303, p < 0.001), albumin (r = -0.205, p < 0.021), and creatinine (r = -0.268, p < 0.002) reflect the dehydration status of participants after training. The correlations between the designed metabolic indices and physiological parameters revealed in this study can be considered sufficient evidence to use these indices for assessing the state of a person's aerobic and anaerobic metabolic systems after training, during fatigue, recovery, and supercompensation.
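As an illustration of the index construction, the following Python sketch computes a normalized-slope index from ECG fiducial points and a Pearson correlation against a blood marker. It assumes the Q, R, S, and J points have already been detected; all times, amplitudes, and series below are hypothetical, not the study's measurements.

```python
# Illustrative sketch (not the authors' implementation): computing
# normalized-slope indices from ECG fiducial points. Assumes Q, R, S and J
# point times (s) and amplitudes (mV) have already been detected for each
# beat; all values below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

def normalized_slope(t0, a0, t1, a1, r_amplitude):
    """Slope of the segment between two fiducial points, normalized by the
    R-wave amplitude so beats of different magnitude are comparable."""
    return ((a1 - a0) / (t1 - t0)) / r_amplitude

# One hypothetical beat: times in seconds, amplitudes in millivolts
tQ, aQ = 0.000, -0.10
tR, aR = 0.020, 1.20
tS, aS = 0.045, -0.25
tJ, aJ = 0.085, 0.05

aerobic_index = normalized_slope(tQ, aQ, tR, aR, r_amplitude=aR)    # QR segment
anaerobic_index = normalized_slope(tS, aS, tJ, aJ, r_amplitude=aR)  # SJ segment
print(aerobic_index, anaerobic_index)

# Association with a blood marker across sessions (synthetic data)
ai_series = np.array([0.92, 0.88, 0.81, 0.85, 0.95])
cpk_series = np.array([150., 190., 240., 210., 140.])  # creatine phosphokinase
r, p = pearsonr(ai_series, cpk_series)
print(f"r={r:.3f}, p={p:.3f}")
```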

Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation

Procedia PDF Downloads 91
240 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have become widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is needed to find a compromise between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value is larger than the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with the number counted by conventional vulnerability factor calculation. In other words, the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
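The ACMVF definition above translates directly into a short computation. The Python sketch below counts a failure only when the deviation from the golden output exceeds a threshold margin and returns the failure ratio; the adder outputs are invented for illustration.

```python
# Minimal sketch of the ACMVF idea under stated assumptions: outputs of a
# circuit observed after each SEU injection are compared with the golden
# output, and a deviation counts as a failure only if it exceeds a
# user-defined threshold margin. Data below are invented.
def acmvf(observed_outputs, expected_output, margin_fraction):
    """Approximate-based Configuration Memory Vulnerability Factor:
    failures / total injections, with a tolerated deviation margin."""
    margin = abs(expected_output) * margin_fraction
    failures = sum(
        1 for out in observed_outputs
        if abs(out - expected_output) > margin
    )
    return failures / len(observed_outputs)

# 32-bit adder example: expected sum and outputs seen after SEU injections
expected = 100_000
observed = [100_000, 100_004, 99_850, 164_768, 100_000, 35_232]

print(acmvf(observed, expected, margin_fraction=0.0))   # conventional factor
print(acmvf(observed, expected, margin_fraction=0.10))  # 10% margin tolerated
```

Setting the margin to zero recovers the conventional vulnerability factor, so the margin parameter alone captures the approximate-computing relaxation.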

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 163
239 Tertiary Training of Future Health Educators and Health Professionals Involved in Childhood Obesity Prevention and Treatment Strategies

Authors: Thea Werkhoven, Wayne Cotton

Abstract:

Adult and childhood rates of obesity in Australia are health concerns of high national priority, retaining epidemic status in the populations affected. Attempts to prevent further increases in the prevalence of childhood obesity in the population aged below eighteen years have had varied success. A multidisciplinary approach has been used, employing strategies in schools, through established health care system usage, and in public health campaigns. Over the last decade, a plateau in prevalence has been reached in the youth population afflicted by obesity, and interest has peaked in school-based strategies to prevent and treat overweight and obesity. Of interest to this study is the tertiary training of future health educators and health professionals destined to be involved in obesity prevention and treatment strategies. Health educators and health professionals are considered instrumental to the success of prevention and treatment strategies and are required to possess sufficient and accurate knowledge in order to be effective in their positions. A common influence on the success of school-based health-promoting activities is the weight-based attitudes possessed by health educators, known to be negative and biased towards overweight or obese children during training and practice. While the tertiary training of future health professionals includes minimal nutrition education, there is no mandatory training in health education or nutrition for pre-service health educators in Australian tertiary institutions. This study aimed to assess the impact of a pedagogical intervention on pre-service health educators and health professionals enrolled in a health and wellbeing elective. The intervention aimed to increase nutrition knowledge and decrease weight bias and was embedded in the twelve-week elective. Participants (n=98) were tertiary students at a major Australian university enrolled in health-related (47%) and non-health-related degrees (53%). A quantitative survey using four valid and reliable instruments was conducted to measure nutrition knowledge, anti-fat attitudes, and weight-stereotyping attitudes at baseline and post-intervention. Scores on each instrument were compared between time points to check whether they had significantly changed and to determine the effect of the intervention on attitudes and knowledge. Anti-fat attitudes at baseline were low and decreased further over the course of the intervention. Scores representing weight bias decreased, but the change was not significant. Fat-stereotyping attitudes became stronger over the course of the intervention, and this change was significant. Nutrition knowledge improved significantly from baseline to post-intervention. The nutrition knowledge and attitude amelioration components of the intervention were thus only partially successful in achieving their outcomes. While the level of nutrition knowledge improved over the course of the intervention, an unintentional increase was observed in weight-based prejudice, which is known to occur in interventions that employ stigma-reduction methodologies. Further research is required into a structured methodology that increases nutrition knowledge and ameliorates weight bias at the tertiary level. In this way, the training provided would help prepare future health educators with the knowledge, skills, and attitudes required to be effective and bias-free in their practice.
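The pre/post comparison described above is a standard paired design. A minimal Python sketch of such a comparison with a paired t-test follows; the scores are synthetic and are not the study's data.

```python
# Hedged sketch: comparing baseline and post-intervention scores on one
# instrument with a paired t-test, as in the pre/post design described
# above. Scores are synthetic, not the study's data.
import numpy as np
from scipy import stats

baseline = np.array([14, 12, 15, 11, 13, 16, 10, 12])  # nutrition knowledge
post     = np.array([18, 15, 19, 14, 17, 18, 13, 16])

t_stat, p_value = stats.ttest_rel(baseline, post)
print(f"mean change = {np.mean(post - baseline):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```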

Keywords: education, intervention, nutrition, obesity

Procedia PDF Downloads 173
238 Global News Coverage of the Pandemic: Towards an Ethical Framework for Media Professionalism

Authors: Anantha S. Babbili

Abstract:

This paper analyzes the media practices dominant in global journalism within the framework of the world press theories of Libertarian, Authoritarian, Communist, and Social Responsibility to evaluate their efficacy in the coverage of the coronavirus, also known as COVID-19. Global media flows, determinants of news coverage, international awareness, and the Western view of the world are critically analyzed within the context of the prevalent news values that underpin free press and media coverage of the world. While evaluating the global discourse paramount to a sustained and dispassionate understanding of world events, this paper proposes an ethical framework that brings clarity, devoid of sensationalism, partisanship, and right-wing or left-wing interpretations, to the breaking and dangerous development of a pandemic. As the world struggles to contain the coronavirus pandemic, with deaths climbing close to 6,000 from late January to mid-March 2020, the populations of developed as well as developing nations are beset with news media renditions of the crisis that are contradictory and confusing and that evoke anxiety, fear, and hysteria. How are we to understand differing news standards and news values? What lessons do we as journalism and mass media educators, researchers, and academics learn in order to construct a better news model and structure of media practice that addresses science, health, and media literacy among media practitioners, journalists, and news consumers? As traditional media struggle to cover the pandemic for their audiences and consumers, social media, from which an increasing number of consumers get their news, have exerted their influence both positively and negatively. Even as the world struggles to grasp the full significance of the pandemic, the World Health Organization (WHO) has been feverishly battling an additional challenge related to the pandemic, which it has termed an 'infodemic': 'an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it.' There is, indeed, a need for journalism and news coverage in times of pandemics that reflects social responsibility and the ethos of public service journalism. Social media and high-tech information corporations, collectively termed GAMAF (Google, Apple, Microsoft, Amazon, and Facebook), can team up with reliable traditional media (newspapers, magazines, book publishers, and radio and television corporations) to ease public emotions and be helpful in times of a pandemic outbreak. GAMAF can, conceivably, weed out sensational and non-credible sources of coronavirus information and exotic cures offered for sale as a quick fix, and demonetize videos that exploit people's vulnerabilities at their lowest ebb. Credible news of utility, delivered in a sustained, calm, and reliable manner, serves people in a meaningful and helpful way. The world's consumers of news and information deserve a healthy and trustworthy news media, at least in the time of the COVID-19 pandemic. Towards this end, the paper proposes a practical model for news media and journalistic coverage during times of a pandemic.

Keywords: COVID-19, international news flow, social media, social responsibility

Procedia PDF Downloads 83
237 Applying Program Theory-Driven Approach to Design and Evaluate a Teacher Professional Development Program

Authors: S. C. Lin, M. S. Wu

Abstract:

The Japanese scholar Manabu Sato has been advocating the Learning Community, which changed Japanese fundamental education over the last three decades; it has also been called a 'Quiet Revolution.' Manabu Sato criticized traditional education for focusing only on individual competition, exams, teacher-centered instruction, and memorization, leaving students lacking learning motivation and, in some cases, giving up on learning altogether. He therefore proclaimed that learning should be a sustainable process of 'constantly weaving the relationship and the meanings' by having dialogues with learning materials, with peers, and with oneself. For a long time, secondary school education in Taiwan has been focused on exams and has emphasized reciting and memorizing. Manabu Sato's learning community program has been implemented very successfully in Japan, and it is worth exploring whether the learning community can resolve the 'escape from learning' phenomenon among secondary school students in Taiwan. This study covered the first year of a two-year project. The project applied a program theory-driven approach to evaluating the impact of teachers' professional development interventions on students' learning, using a mix of methods, qualitative inquiry, and a quasi-experimental design. The current study shows the results of using the theory-driven approach to program planning to design and evaluate a teachers' professional development program (TPDP). Manabu Sato's learning community theory was applied to structure all components of a 54-hour workshop. The participants were seven secondary school science teachers from two schools. The research procedure comprised: 1) defining the problem and assessing participants' needs; 2) selecting the theoretical framework; 3) determining theory-based goals and objectives; 4) designing the TPDP intervention; 5) implementing the TPDP intervention; and 6) evaluating the TPDP intervention. Data were collected from a number of different sources, including a TPDP checklist, workshop activity responses, an LC subject-matter test, teachers' e-portfolios, course design documents, and a teachers' belief survey. The major findings indicated that the program design was suitable for the participants. More than 70% of the participants were satisfied with the program implementation, and they revealed that the TPDP was beneficial to their instruction and promoted their professional capacities. However, due to heavy teaching loads during the project, some participants were unable to attend all workshops. To resolve this problem, the author provided options such as watching DVDs or reading articles offered by the research team. The study also established a communication platform for participants to share their thoughts and learning experiences. The TPDP had a marked impact on participants' teaching beliefs: they came to believe that learning should be a sustainable process of 'constantly weaving the relationship and the meanings' by having dialogues with learning materials, with peers, and with oneself. Having learned from the TPDP, they applied a learner-centered approach and instructional strategies, such as learning by doing, collaborative learning, and reflective learning, to design their courses. To conclude, participants' beliefs, knowledge, and skills were promoted by the program instruction.

Keywords: program theory-driven approach, learning community, teacher professional development program, program evaluation

Procedia PDF Downloads 288
236 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and they also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms to assist healthcare managers in facing unexpected demands with fewer resources. This paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate the similarity and coupling of demands. This is achieved by predicting the range of a demand (its class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The chi-squared and Student's t significance tests are applied to data over a 39-year span for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that assume, as the null hypothesis, that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. The statistical tests confirm the ML couplings, making the predictions statistically meaningful and proving that a target service can be matched reliably to other services, while ML shows that these indicated relationships can also be linear. Zero padding was used for missing years' records and better illustrated such relationships, both over limited years and over the entire span, offering long-term data visualizations; limited-year groups showed how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years. The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and the behavior at different learning ratios. The impact of k-NN and of cross-correlation and C-Means first groupings is also studied over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while its ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
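As an illustration of the demand-class prediction and association testing described above, the following Python sketch trains CART, RF, and LGR classifiers on synthetic data, reports ACC and AUC, and runs a chi-squared test on a hypothetical contingency table of two binned demands. It is a sketch of the general technique, not the paper's pipeline.

```python
# Illustrative sketch, not the paper's pipeline: classifying a demand range
# with CART, random forest and logistic regression, and testing association
# between two demand categories with a chi-squared test. All data synthetic.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))                  # other services' demands
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # target demand class (high/low)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("CART", DecisionTreeClassifier(random_state=0)),
                    ("RF", RandomForestClassifier(random_state=0)),
                    ("LGR", LogisticRegression())]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          "ACC=%.3f" % accuracy_score(y_te, model.predict(X_te)),
          "AUC=%.3f" % roc_auc_score(y_te, proba))

# Chi-squared test on a contingency table of two binned demands
table = np.array([[30, 10], [12, 48]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```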

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 205
235 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to their perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, the analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, the optimization of wildfire suppression methods, and the prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, the impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and for mitigating the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 38
234 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock

Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer

Abstract:

Especially in the alpine region, available areas for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre, a cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE focused on transdisciplinary research in the fields of building and energy technology, building envelopes, and geoinformation, as well as on the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication to optimize construction quality, construction time, and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction suitable for adding a maximum of two storeys to residential buildings. The applicability of the system is mainly influenced by the existing building stock; therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and are located outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification is created. Component structures, load-bearing structures, and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications. In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and the system is applied in the planning of a two-storey extension on the other. A company-independent construction standard offers the possibility of cooperation and the bundling of capacities, so that larger construction volumes can be handled in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is published under an open license; in this context, open means publicly published and freely usable and modifiable for one's own use, as long as the authorship and any deviations are mentioned. The construction system will support planners and contractors from design to execution. The companies are provided with a system manual containing the system description and an application manual; this manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of complete component and detail specifications. This presentation highlights the initial situation, the motivation, and the approach, but especially the technical solution and the possibilities for its application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, along with their application.

Keywords: redensification, SME, urban development, wood building system

Procedia PDF Downloads 77
233 Challenges in Employment and Adjustment of Academic Expatriates Based in Higher Education Institutions in the KwaZulu-Natal Province, South Africa

Authors: Thulile Ndou

Abstract:

The purpose of this study was to examine the challenges encountered in attracting and recruiting academic expatriates, who in turn encounter their own obstacles in adjusting to and settling in their host country, host academic institutions, and host communities. The scarcity of literature on the attraction, placement, and management of academic expatriates in the South African context has been acknowledged. Moreover, Higher Education Institutions in South Africa have voiced concerns about the delayed and prolonged recruitment and selection processes experienced when employing academic expatriates. Once employed, academic expatriates should be supported and acquainted with their surroundings and the local communities, and assisted in establishing working relations with colleagues, in order to facilitate their adjustment and integration; hence, an employer should play a critical role in facilitating the adjustment of academic expatriates. This mixed-methods study was located in four Higher Education Institutions in the KwaZulu-Natal province of South Africa. An explanatory sequential design was deployed. The chief merit of this approach is that it employs both quantitative and qualitative techniques of inquiry, so the study examined and interrogated its subject from a multiplicity of quantitative and qualitative vantage points, yielding a much richer and more nuanced account of the subject. A 5-point Likert-scale questionnaire was used to collect quantitative data on interaction adjustment, general adjustment, and work adjustment from academic expatriates. One hundred and forty-two (142) academic expatriates participated in the quantitative study. Qualitative data relating to the employment process and the support offered to academic expatriates were collected through a structured questionnaire and semi-structured interviews; a total of 48 respondents, including line managers, human resources practitioners, and academic expatriates, participated in the qualitative study. Independent t-tests, ANOVA, and descriptive statistics were performed to analyse and interpret the quantitative data, and thematic analysis was used for the qualitative data. The qualitative results revealed that academic talent is sourced from outside the borders of the country because of the academic skills shortage in almost all academic disciplines, especially those associated with Science, Engineering, and Accounting; however, delays in the work permit application process made it difficult to finalise recruitment and selection on time. Furthermore, the quantitative results revealed that academic expatriates experience general and interaction adjustment challenges associated with the use of the local language and the understanding of local culture, although female academic expatriates were found to be better adjusted in these two areas than male academic expatriates. Moreover, significant mean differences were found between institutions, suggesting that academic expatriates based in rural areas experienced adjustment challenges differently from those based in urban areas. The study pointed to the need for policy revisions in the areas of immigration, human resources, and academic administration.
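The quantitative analysis described above can be illustrated with a short sketch. The following Python fragment runs an independent t-test on adjustment scores by gender and a one-way ANOVA across institutions; all scores are synthetic, not the study's data.

```python
# Hedged sketch of the quantitative analysis described above: an independent
# t-test on adjustment scores by gender and a one-way ANOVA across
# institutions. All scores below are synthetic.
import numpy as np
from scipy import stats

female = np.array([3.8, 4.1, 3.9, 4.3, 4.0])
male   = np.array([3.2, 3.5, 3.1, 3.6, 3.4])
t_stat, p = stats.ttest_ind(female, male)
print(f"gender: t={t_stat:.2f}, p={p:.4f}")

inst_a = np.array([3.9, 4.0, 3.7, 4.2])  # urban institution
inst_b = np.array([3.1, 3.3, 3.0, 3.4])  # rural institution
inst_c = np.array([3.6, 3.8, 3.5, 3.9])
f_stat, p = stats.f_oneway(inst_a, inst_b, inst_c)
print(f"institutions: F={f_stat:.2f}, p={p:.4f}")
```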

Keywords: academic expatriates, recruitment and selection, interaction and general adjustment, work adjustment

Procedia PDF Downloads 279
232 The Impact of China’s Waste Import Ban on the Waste Mining Economy in East Asia

Authors: Michael Picard

Abstract:

This proposal sheds light on the changing legal geography of the global waste economy. Global waste recycling has become a multi-billion-dollar industry; NASDAQ predicts the emergence of a worldwide 1,296G$ waste management market between 2017 and 2022. Underlining this evolution, a new generation of preferential waste-trade agreements has emerged in the Pacific. In the last decade, Japan has concluded a series of bilateral treaties with Asian countries, most recently with China. An agreement between Tokyo and Beijing was formalized on 7 May 2008, forging an economic partnership on waste transfer and mining. The agreement set up International Recycling Zones, where certified recycling plants in China process industrial waste imported from Japan. Under the joint venture, Chinese companies salvage the embedded value from Japanese industrial discards, reprocess them, and send them back to Japanese manufacturers, such as Mitsubishi and Panasonic. This circular economy is designed to convert surplus garbage into surplus value. Ever since the opening of the Sino-Japanese eco-parks, millions of tons of plastic and e-waste have been exported from Japan to China every year. Yet, quite unexpectedly, China has recently closed its waste market to imports, jeopardizing Japan's billion-dollar exports to China. China notified the WTO that, by the end of 2017, it would no longer accept imports of plastics and certain metals. Given China's share of Japanese waste exports, a complete closure of China's market would require Japan to find new uses for the recyclable industrial trash it generates domestically every year. It remains to be seen how China will effectively implement its ban on waste imports, considering the economic interests at stake; what remains to be clarified, at this stage, is whether China's ban will negatively affect the recycling trade between Japan and China. What is clear, though, is the rapid transformation in the legal geography of waste mining in East Asia. For decades, East Asian waste trade has been tied up in an 'ecologically unequal exchange' between the Japanese core and the Chinese periphery. This globally unequal waste distribution could be measured by the Environmental Stringency Index, which revealed that waste regulation was 39% weaker in the Global South than in Japan; this explains why Japan could legally export its hazardous plastic and electronic discards to China. The asymmetric flow of hazardous waste between Japan and China carried the colonial heritage of international law: the legal geography of waste distribution was closely associated with the imperial construction of an ecological trade imbalance between the Japanese source and the Chinese sink. Thus, China's recent decision to ban hazardous waste imports is a sign of a broader ecological shift. As a global economic superpower, China has announced to the world that it will no longer be the planet's junkyard. The policy change will have profound consequences for the global circulation of waste, re-routing global waste towards countries south of China, such as Vietnam and Malaysia. By the time the Berlin Conference takes place in May 2018, the presentation will be able to assess more accurately the effect of the Chinese ban on the transboundary movement of waste in Asia.

Keywords: Asia, ecological unequal exchange, global waste trade, legal geography

Procedia PDF Downloads 192
231 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study

Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji

Abstract:

Phase change materials (PCMs) present attractive features that make them a passive solution for thermal comfort in buildings during the summer. They show a large storage capacity per unit volume in comparison with other structural materials like bricks or concrete, and if their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks: commercial PCMs, such as paraffins, offer a low thermal conductivity, affecting the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. Material selection is the starting point of the design stage, and it does not leave much room for optimization: the PCM melting point depends highly on the atmospheric characteristics of the building location and must lie between the maximum and minimum temperatures reached during the day. The geometry of the PCM containers and their geometrical distribution are design parameters as well; they significantly affect the heat transfer, and therefore the underlying phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the melting of the PCM occurs. At night, the PCM must be regenerated to be ready for subsequent use, and when the system is not in service, a minimal amount of thermal exchange is desired. These functions involve both sensible and latent heat storage and release; hence, different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger, with an in-line arrangement selected as the geometrical distribution of the containers. To allow visual identification, the container material and a section of the test bench were transparent. Instruments were placed on the bench for measuring temperature and velocity, and the PCM properties were obtained through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed phenomena both at a local level (tubes) and at an overall level (exchanger), with conduction and convection appearing as the main heat transfer mechanisms. From these results, two approaches to analyzing the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. For the second approach, the temperature measurements were used to find significant dimensionless numbers and parameters, such as the Stefan, Fourier, and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles; in particular, the presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
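The dimensionless groups mentioned above follow from standard definitions. The Python sketch below evaluates the Stefan, Fourier, and Rayleigh numbers for a paraffin-type PCM; the property values are typical order-of-magnitude figures assumed for illustration, not the study's measurements.

```python
# Sketch under stated assumptions: computing the dimensionless groups named
# above for a paraffin-type PCM. Property values are typical order-of-
# magnitude figures, not the study's measurements.
g = 9.81            # gravity, m/s^2

# Hypothetical paraffin properties
cp = 2100.0         # specific heat, J/(kg K)
L_latent = 180e3    # latent heat of fusion, J/kg
alpha = 1.0e-7      # thermal diffusivity, m^2/s
nu = 5.0e-6         # kinematic viscosity, m^2/s
beta = 8.0e-4       # thermal expansion coefficient, 1/K

dT = 8.0            # driving temperature difference, K
L_c = 0.02          # characteristic length (tube radius), m
t = 3600.0          # elapsed time, s

stefan = cp * dT / L_latent                       # sensible vs latent heat
fourier = alpha * t / L_c**2                      # dimensionless time
rayleigh = g * beta * dT * L_c**3 / (nu * alpha)  # buoyancy vs diffusion

print(f"Ste = {stefan:.3f}, Fo = {fourier:.3f}, Ra = {rayleigh:.2e}")
# A large Ra during melting would point to natural convection, consistent
# with the influence observed in the correlations above.
```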

Keywords: phase change materials, air-PCM exchangers, convection, conduction

Procedia PDF Downloads 154
230 An Alternative to Problem-Based Learning in a Post-Graduate Healthcare Professional Programme

Authors: Brogan Guest, Amy Donaldson-Perrott

Abstract:

The Master's of Physician Associate Studies (MPAS) programme at St George's, University of London (SGUL), is an intensive two-year course that trains students to become physician associates (PAs). PAs are generalist healthcare providers who work in primary and secondary care across the UK. PA programmes face the difficult task of preparing students to become safe medical providers in two short years. Our goal is to teach students to develop clinical reasoning early in their studies; historically, this has been done predominantly through problem-based learning (PBL). We have had increasing concern about student engagement in PBL and difficulty recruiting facilitators to maintain the low student-to-facilitator ratio it requires. To address this issue, we created 'Clinical Application of Anatomy and Physiology (CAAP)': peer-led, interactive, problem-based, small-group sessions designed to develop students' clinical reasoning skills. The sessions were designed using the concept of Team-Based Learning (TBL). Students were divided into small groups, and each completed a pre-session quiz consisting of difficult questions devised to assess students' application of medical knowledge. The quiz was completed in small groups without access to external resources. After the quiz, students worked through a series of open-ended clinical tasks using all available resources. They worked at their own pace, and the sessions were peer-led rather than facilitator-driven; for a group of 35 students, two facilitators observed the sessions, which used infinite-canvas whiteboard software. Each group member was encouraged to participate actively and to work with the group to complete the 15-20 tasks. Each session ran for 2 hours and concluded with a post-session quiz identical to the pre-session quiz. We obtained subjective feedback from students on their experience with CAAP and evaluated the objective benefit of the sessions through the quiz results. Qualitative feedback from students was generally positive, with students feeling the sessions increased engagement, clinical understanding, and confidence. They found the small-group aspect beneficial and the technology easy to use and intuitive. They also liked building a resource for their future revision, something unique to CAAP compared with PBL, which our students participate in weekly. Preliminary quiz results showed improvement from pre- to post-session; further statistical analysis will occur once all sessions are complete (the final session is to run in December 2022) to determine significance. As a post-graduate healthcare professional programme, we have a strong focus on self-directed learning. While PBL has been a mainstay of our curriculum since its inception, there are limitations and concerns about its future in view of student engagement and facilitator availability. While CAAP is not TBL, it draws on the benefits of peer-led, small-group work with pre- and post-session team-based quizzes. The pilot of these sessions has shown that students are engaged by CAAP and can make significant progress in clinical reasoning in a short amount of time, and that this can be achieved with a high student-to-facilitator ratio.

Keywords: problem based learning, team based learning, active learning, peer-to-peer teaching, engagement

Procedia PDF Downloads 60
229 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher

Abstract:

The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous) and thus give the global behavior a degree of anisotropy that depends on the initial microstructure. For these reasons, the prediction of the global behavior of materials in relationship with their microstructure must be performed with a multi-scale approach, and multi-scale modeling in the context of crystal plasticity is widely used. In the present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity context, together with the finite element method, is used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is neglected; its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families, with an ⟨a⟩ Burgers vector, and the pyramidal family, with a ⟨c+a⟩ Burgers vector. The twinning mechanism of plastic strain is not observed in Ti-6Al-4V and is therefore not considered in the present modeling. Nine representative elementary volumes (REVs) are generated with Voronoi tessellations. For each individual equiaxed grain, its own crystallographic orientation with respect to the loading is taken into account. The meshing strategy is optimized so as to eliminate meshing effects and, at the same time, to allow calculation of the individual grain size. The stress and strain fields are determined at each Gauss point of the mesh elements. Post-processing is used to calculate the local behavior (in each grain), and then, by appropriate homogenization, the macroscopic behavior is calculated. The developed model is validated by comparing the numerical simulation results with experimental data reported in the literature. It is observed that the present model is able to predict the global mechanical behavior of the Ti-6Al-4V alloy and to investigate the effects of the microstructural parameters. According to the simulations performed on the generated volumes (REVs), the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal, or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and hence an anisotropic macroscopic mechanical behavior. The average grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as the average grain size decreases, the yield strength increases according to the Hall-Petch relationship. The grain size distribution gives the strain fields considerable heterogeneity: with increasing grain size, scattering in the localization of plastic strain is observed, so that in certain areas the stress concentrations are stronger than in other regions.
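The grain-size effect mentioned above is the classical Hall-Petch relation, which in its standard form (not taken from the paper's equations) reads:

```latex
% Hall-Petch relation: yield strength vs. average grain size d
\sigma_y = \sigma_0 + k_y \, d^{-1/2}
% \sigma_0: friction stress resisting dislocation motion
% k_y: strengthening coefficient; both are material constants
```

As the average grain size d decreases, the d^{-1/2} term grows, which is the trend the simulations reproduce.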

Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy

Procedia PDF Downloads 99
228 Rheological Properties of Thermoresponsive Poly(N-Vinylcaprolactam)-g-Collagen Hydrogel

Authors: Serap Durkut, A. Eser Elcin, Y. Murat Elcin

Abstract:

Stimuli-sensitive polymeric hydrogels have received extensive attention in the biomedical field due to their sensitivity to physical and chemical stimuli (temperature, pH, ionic strength, light, etc.). This study describes the rheological properties of a novel thermoresponsive poly(N-vinylcaprolactam)-g-collagen hydrogel. We first synthesized a novel carboxyl-terminated thermoresponsive poly(N-vinylcaprolactam) (PNVCL-COOH) via facile free radical polymerization. This compound was then effectively grafted onto native collagen, utilizing covalent bonds between the carboxylic acid groups at the ends of the chains and the amine groups of the collagen, formed with the cross-linking agents EDC/NHS, to give PNVCL-g-Col. The newly formed hybrid hydrogel displayed novel properties, such as increased mechanical strength and thermoresponsive characteristics. PNVCL-g-Col showed a lower critical solution temperature (LCST) at 38°C, which is very close to body temperature. Rheological studies determine the structural-mechanical properties of materials and serve as a valuable characterization tool. The rheological properties of hydrogels are described in terms of two dynamic mechanical properties: the elastic modulus G′ (also known as dynamic rigidity), representing the reversibly stored energy of the system, and the viscous modulus G″, representing the irreversible energy loss. In order to characterize PNVCL-g-Col, the rheological properties were measured as a function of temperature and time during the phase transition. Below the LCST, favorable interactions allowed the dissolution of the polymer in water via hydrogen bonding. At temperatures above the LCST, the PNVCL molecules within PNVCL-g-Col aggregated due to dehydration, causing the hydrogel structure to become dense. When the temperature reached ~36°C, the G′ and G″ values crossed over, indicating that PNVCL-g-Col underwent a sol-gel transition and formed an elastic network. Following a temperature plateau at 38°C, near human body temperature, the sample displayed stable elastic network characteristics. The G′ and G″ values of the PNVCL-g-Col solutions sharply increased in the 6-9 minute interval, due to the rapid transformation into a gel-like state and the formation of elastic networks. Copolymerization with collagen leads to an increase in G′, as the collagen structure contains flexible polymer chains that bestow elastic properties; the elasticity of the proposed structure correlates with the number of intermolecular cross-links in the hydrogel network, increasing the viscosity. By contrast, at 8 minutes, the G′ and G″ values of pure collagen solutions sharply decreased, due to the decomposition of the elastic and viscous network. Complex viscosity is related to the mechanical performance of the hydrogel and its resistance to deformation. The complex viscosity of the PNVCL-g-Col hydrogel changed drastically with temperature, and the mechanical performance of the PNVCL-g-Col hydrogel network increased, exhibiting less deformation. The rheological assessment of the novel thermoresponsive PNVCL-g-Col hydrogel showed that the network has stronger mechanical properties due to both permanent, stable covalent bonds and temperature-dependent physical interactions, such as hydrogen bonds and hydrophobic interactions.
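The crossover criterion for the sol-gel transition (G′ = G″) and the complex viscosity, |η*| = √(G′² + G″²)/ω, can be illustrated with a short sketch. The Python fragment below uses synthetic sweep data, chosen only so that the crossover lands near 36°C as reported above; it is not the study's data.

```python
# Illustrative sketch (synthetic data): locating the sol-gel transition as
# the temperature where G' crosses over G'', and computing the complex
# viscosity |eta*| = sqrt(G'^2 + G''^2) / omega from an oscillatory sweep.
import numpy as np

temperature = np.array([30.0, 32.0, 34.0, 36.0, 38.0, 40.0])  # deg C
G_prime  = np.array([ 2.0,  4.0,  9.0, 20.0, 80.0, 150.0])    # elastic, Pa
G_dprime = np.array([ 8.0, 10.0, 14.0, 20.0, 35.0,  40.0])    # viscous, Pa
omega = 6.28  # angular frequency, rad/s

complex_viscosity = np.sqrt(G_prime**2 + G_dprime**2) / omega  # Pa.s

# First temperature at which the elastic modulus dominates
crossover = temperature[np.argmax(G_prime >= G_dprime)]
print(f"sol-gel crossover near {crossover} degC")
print(complex_viscosity)
```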

Keywords: poly(N-vinylcaprolactam)-g-collagen, thermoresponsive polymer, rheology, elastic modulus, stimuli-sensitive

Procedia PDF Downloads 219