Search results for: linear complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4918

478 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts

Authors: Maria Ledinskaya

Abstract:

This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation by looking at their practical application on a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown conditions and materials, unconfirmed budget. The project was later impacted by the COVID-19 pandemic, introducing indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management

Procedia PDF Downloads 71
477 Aerofloral Studies and Allergenicity Potentials of Dominant Atmospheric Pollen Types at Some Locations in Northwestern Nigeria

Authors: Olugbenga S. Alebiosu, Olusola H. Adekanmbi, Oluwatoyin T. Ogundipe

Abstract:

Pollen and spores have been identified as major airborne bio-particles inducing respiratory disorders such as asthma, allergic rhinitis and atopic dermatitis among hypersensitive individuals. An aeropalynological study was conducted over a one-year sampling period with a view to investigating the monthly depositional rate of atmospheric pollen and spores, the influence of the immediate vegetation on airborne pollen distribution, and the allergenic potentials of dominant atmospheric pollen types at selected study locations in Bauchi and Taraba states, Northwestern Nigeria. A Tauber-like pollen trap was employed for air sampling, with the sampler positioned at a height of 5 feet above the ground, followed by a monthly collection of the recipient solution for the sampling period. The collected samples were subjected to acetolysis treatment and examined microscopically, with the identification of pollen grains and spores using reference materials and published photomicrographs. Plants within the surrounding vegetation were enumerated. Crude protein contents extracted from the pollen types found to be commonly dominant at both study locations (Senna siamea, Terminalia catappa, Panicum maximum and Zea mays) were used to sensitize Mus musculus. Histopathological studies of bronchi and lung sections from certain dead M. musculus in the test groups were conducted. Blood samples were collected from the pre-orbital vein of M. musculus and processed for serological and haematological (differential and total white blood cell counts) studies. ELISA was used to determine the levels of serological parameters: IgE and cytokines (TNF-α, IL-5, and IL-13). Statistical significance was observed in the correlations between the levels of serological and haematological parameters elicited by each test group, in the differences between the levels of serological and haematological parameters elicited by each test group and those of the control, as well as at varying sensitization periods. The results from this study revealed the dominant airborne pollen types across the study locations: Syzygium guineense, Tridax procumbens, Elaeis guineensis, Mimosa sp., Borreria sp., Terminalia sp., Senna sp. and Poaceae. Nephrolepis sp., Pteris sp. and a trilete fern also produced spores. This study also revealed that some of the airborne pollen types were produced by local plants at the study locations. Bronchi sections of M. musculus after the first and second sensitizations, as well as lung sections after the first sensitization with Senna siamea, showed areas of necrosis. Statistical significance was recorded in the correlations between the levels of some serological and haematological parameters produced by each test group and those of the control, as well as at certain sensitization periods. The study revealed some candidate pollen allergens for allergy sufferers at the study locations and also established the complexity of interaction between immune cells, IgE and cytokines at varied periods of mice sensitization, forming a paradigm of human immune response to different pollen allergens. However, it is expedient that further studies be conducted on these candidate pollen allergens for their allergenicity potential in humans within their immediate environment.

Keywords: airborne, hypersensitive, mus musculus, pollen allergens, respiratory, tauber-like

Procedia PDF Downloads 134
476 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the Mars surface has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images for generating a more accurate and trustworthy surface of Mars. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D to 2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs did not improve the results significantly. Using the 3D to 2D transformation parameters provided two to three meters accuracy. The best results were reported using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model as it provides the required accuracy for ASPRS large scale mapping standards. However, a well-distributed set of GCPs is key to providing such accuracy. The model is simple to apply and does not need substantial computations.
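
To make the evaluation procedure concrete, the following is a minimal sketch (not the authors' code) of how the 11 DLT parameters can be estimated from GCPs by linear least squares and then checked against independent check points via RMSE; the ground and image coordinates below are synthetic placeholders, not HiRISE or MOLA measurements.

    import numpy as np

    def fit_dlt(ground_xyz, image_xy):
        """Estimate the 11 DLT parameters from 3D ground control points and
        their 2D image coordinates by linear least squares."""
        A, b = [], []
        for (X, Y, Z), (x, y) in zip(ground_xyz, image_xy):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
            b.append(x)
            A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
            b.append(y)
        params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
        return params  # L1 ... L11

    def project_dlt(params, ground_xyz):
        """Project 3D ground points into image space with fitted DLT parameters."""
        X, Y, Z = np.asarray(ground_xyz, float).T
        den = params[8] * X + params[9] * Y + params[10] * Z + 1.0
        x = (params[0] * X + params[1] * Y + params[2] * Z + params[3]) / den
        y = (params[4] * X + params[5] * Y + params[6] * Z + params[7]) / den
        return np.column_stack([x, y])

    # Synthetic GCPs and check points (placeholders, not mission data)
    true_params = np.array([0.8, 0.1, 0.05, 5.0, -0.1, 0.9, 0.02, 3.0, 1e-5, 2e-5, 1e-5])
    rng = np.random.default_rng(0)
    gcp_xyz = rng.uniform(0.0, 1000.0, (10, 3))
    chk_xyz = rng.uniform(0.0, 1000.0, (5, 3))
    gcp_xy = project_dlt(true_params, gcp_xyz)
    chk_xy = project_dlt(true_params, chk_xyz)

    fitted = fit_dlt(gcp_xyz, gcp_xy)
    residuals = project_dlt(fitted, chk_xyz) - chk_xy
    rmse = np.sqrt((residuals ** 2).mean(axis=0))
    print("check-point RMSE in x and y:", rmse)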

Keywords: mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 57
475 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas

Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens

Abstract:

In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, the application of SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and typically agencies resort back to conventional risk/vulnerability assessment. These challenges include the complex nature of social vulnerability concepts, which influences their applicability; complications in identifying and measuring social vulnerability determinants; the transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that impact the vulnerability regime (including contention over when these impacts might emerge). Research suggests that the conventional quantitative approaches in SVA cannot appropriately address these problems; hence, the outcomes could potentially be misleading and not fit for addressing the ongoing uncertain rise in risk. The second phase of New Zealand’s Resilience to Nature’s Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development in dealing with changing coastal systems and accounts for the complex dynamics of New Zealand’s coastal systems (including socio-economic, environmental and cultural dynamics). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable changes, to create mechanisms that enhance social and community resilience, and to fit New Zealand’s multi-layer governance system. This paper aims to analyse the conventional approaches and methodologies in SVA and to offer recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice. The research adopts a qualitative research design to examine different aspects of the conventional SVA processes, and the methods used to achieve the research objectives include a systematic review of the literature and case study methods. We found that the conventional quantitative, reductionist and deterministic mindset in SVA processes, with its focus on the impacts of rapid stressors (i.e. tsunamis, floods), shows some deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper focuses on the links between resilience and vulnerability and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and therefore influence the perception of the vulnerability regime and its assessment processes. In this regard, it is argued that a shift of paradigm from ‘specific resilience’, which focuses on adaptive capacity associated with the notion of ‘bouncing back’, to ‘general resilience’, which accounts for system transformability, regime shift, and ‘bouncing forward’, can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.

Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks

Procedia PDF Downloads 101
474 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures

Authors: Irfan Anjum Manarvi, Fawzi Aljassir

Abstract:

Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to extend the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately develop processes and tools either to prevent fracture or to repair its damage. The literature shows that mechanical professionals conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results to be insufficient due to various limitations of tools and test equipment and difficulties in the availability of human bones. They proposed the need for further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors and then carry out experimental or theoretical studies. Finite element analysis is a technique which was developed for the aerospace industry due to the complexity of design and materials. Over a period of time, it has found applications in many other industries due to its accuracy and flexibility in the selection of materials and types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also started to see its applicability. However, the work done in the area of tarsals, metatarsals and phalanges using this technique is very limited. Therefore, the present research has been focused on using this technique for the analysis of these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used for accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to make these computer geometric models. These were then imported into finite element analysis software, and a length-refining process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads or their combination. Results were collected for deformations along various axes, and stress and strain distributions were observed to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of these could fail earlier, and this comparison is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as industrial engineers, for the development of foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to carry out analysis of a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.
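
By way of illustration, the kind of post-processing used to flag probable fracture locations from FE stress output can be sketched as below; the nodal stress tensors and the strength threshold are hypothetical placeholders, not measured bone properties from this study.

    import numpy as np

    def von_mises(s):
        """Von Mises equivalent stress from a stress tensor given as
        [s_xx, s_yy, s_zz, s_xy, s_yz, s_zx] (consistent units, e.g. MPa)."""
        sxx, syy, szz, sxy, syz, szx = s
        return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                       + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2))

    # Hypothetical FE output: stress tensors (MPa) at a few nodes of one bone model
    nodal_stress = {
        "node_101": [60.0, -20.0, 5.0, 12.0, 3.0, 8.0],
        "node_102": [110.0, 15.0, -10.0, 25.0, 5.0, 2.0],
        "node_103": [30.0, 10.0, 2.0, 4.0, 1.0, 0.5],
    }
    assumed_strength_mpa = 100.0  # placeholder failure threshold, not a measured bone property

    for node, s in nodal_stress.items():
        vm = von_mises(s)
        flag = "probable fracture location" if vm > assumed_strength_mpa else "ok"
        print(f"{node}: von Mises = {vm:.1f} MPa -> {flag}")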

Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis

Procedia PDF Downloads 329
473 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve the discrepancies of the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation model. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were validated using both Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
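
The single-outflow control law and the near-balance of rain and evaporation described above can be illustrated with a minimal water-balance sketch; the outflow coefficient, reference level and forcing values are placeholders, not the calibrated Lake Victoria parameters.

    # Water-balance sketch: dV/dt = A*(P - E) + Q_in - Q_out, with the single outflow
    # modelled by a linear control law Q_out = k*(h - h0) responding to mean level.
    A_LAKE = 68_800e6    # lake surface area in m^2 (68,800 km^2, as stated above)
    K_OUT = 5_000.0      # outflow coefficient in m^2/s (hypothetical, not calibrated)
    H0 = 0.0             # reference water level in m (hypothetical)
    DT = 86_400.0        # time step of one day, in s

    def step(h, rain, evap, q_in):
        """Advance the mean water level h (m) by one day.
        rain and evap are in m/day; q_in is the river inflow in m^3/s."""
        q_out = max(K_OUT * (h - H0), 0.0)                  # linear outflow control law
        dV = A_LAKE * (rain - evap) + (q_in - q_out) * DT   # volume change over one day, m^3
        return h + dV / A_LAKE                              # level change = volume / surface area

    h = 1.0  # initial level above the reference, m
    for day in range(365):
        # rain and evaporation nearly in balance, as reported for the lake
        h = step(h, rain=0.0045, evap=0.0044, q_in=1_000.0)
    print(f"simulated mean level after one year: {h:.2f} m")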

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
472 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based (blank) testing was the only diagnostic procedure available [6]; it has some significant limitations that affect the reliability of the data obtained [7] and increases time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics allow these limitations of existing tools to be avoided. However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight “points of growth” for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design used to create and validate an original Russian comprehensive computer test for measuring intellectual development in school-age children will be presented.
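
As an illustration of the adaptive-testing logic that such modern psychometric instruments build on, a minimal sketch of item selection by maximum Fisher information under a two-parameter logistic (2PL) IRT model is given below; the item bank and the crude ability update are simplified placeholders, not the instrument under development.

    import numpy as np

    def p_correct(theta, a, b):
        """2PL item response function: probability of a correct answer."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information of a 2PL item at ability theta."""
        p = p_correct(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    # Hypothetical item bank: (discrimination a, difficulty b) pairs
    bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2), (2.0, -0.3)]

    theta = 0.0            # provisional ability estimate
    administered = set()
    for step in range(3):
        # choose the unused item with maximum information at the current estimate
        idx = max((i for i in range(len(bank)) if i not in administered),
                  key=lambda i: item_information(theta, *bank[i]))
        info = item_information(theta, *bank[idx])
        administered.add(idx)
        answered_correctly = True                      # placeholder response; a real CAT scores the answer
        theta += 0.5 if answered_correctly else -0.5   # crude update; real CATs use ML or Bayesian estimation
        print(f"step {step}: item {idx}, info {info:.3f}, new theta {theta:.2f}")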

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 80
471 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems

Authors: Joachim F. Sartor

Abstract:

According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious area that possess a surface drainage system and discharge into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because the impact may be significant. Since there is a lack of sufficient measurements of storm-runoff events for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special consideration in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area size, their hydrological behavior approximates that of rural catchments, e.g. sub-surface flow may prevail and lag times are comparably long. Only a few observed peak flow values and simple (mostly empirical) approaches are offered in the literature for Central Europe. Most of them are at least helpful for cross-checking results that are achieved by simulation without calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany with catchment areas between 0.33 and 1.07 km², the author investigated, by multiple-event simulation, three different approaches to determining the rainfall excess. These are the modified SCS variable run-off coefficient methods by Lutz and Zaiß as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two run-off coefficient methods showed results that are of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to such a degree that it would justify the additional data collection that its parameter determination requires. In particular, typical convective summer events after long dry periods, which are often decisive for sewer networks (less so for rivers), showed discrepancies between simulated and measured flow hydrographs.
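
As an illustration of the runoff concentration step mentioned above, a minimal sketch of routing rainfall excess through a single cascade of linear reservoirs (the serial building block of the parallel-cascade method) is given below; the storage constant, number of reservoirs and excess series are placeholders, not calibrated values from the monitored watersheds.

    def cascade_of_linear_reservoirs(excess, n_reservoirs, k, dt):
        """Route a rainfall-excess series through n identical linear reservoirs
        (storage S = k * outflow Q), using an explicit storage update per step."""
        storages = [0.0] * n_reservoirs
        outflow = []
        for q_in in excess:
            for i in range(n_reservoirs):
                q_out = storages[i] / k                 # linear reservoir: Q = S / k
                storages[i] += (q_in - q_out) * dt      # water balance of reservoir i
                q_in = q_out                            # outflow feeds the next reservoir
            outflow.append(q_in)
        return outflow

    # Hypothetical rainfall-excess pulse (arbitrary units per time step)
    excess = [0.0, 2.0, 5.0, 3.0, 1.0] + [0.0] * 15
    hydrograph = cascade_of_linear_reservoirs(excess, n_reservoirs=3, k=2.0, dt=1.0)
    print([round(q, 2) for q in hydrograph])

The attenuated and delayed peak of the printed hydrograph reproduces the long lag times noted above for natural catchments; a parallel-cascade variant simply superimposes the outflows of two such cascades with different parameters.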

Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage

Procedia PDF Downloads 151
470 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams

Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin

Abstract:

Fire incidents have steadily increased over the last year according to the National Emergency Management Agency of South Korea. Even though most of the fire incidents with property damage have occurred in buildings, rehabilitation has not been properly done with consideration of structural safety. Therefore, this study aims at evaluating rehabilitation effects on fire damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa. Two different cover thicknesses, 40 mm and 50 mm, were used. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO-834 standard time-temperature curve. Rehabilitation was done by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Both fire damaged beams and rehabilitated beams were tested with a four-point loading system to observe structural behaviors and the rehabilitation effect. To verify the experiments, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire damaged beams. In addition to the fire damaged beam models, the rehabilitated part was added with the material properties of polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and geometrically non-linear analyses were performed. Test results showed that the maximum loads of the rehabilitated beams were 8~10% higher than those of the non-rehabilitated beams and even 1~6% higher than those of the non-fire damaged beams. The stiffness of the rehabilitated beams was also larger than that of the non-rehabilitated beams but smaller than that of the non-fire damaged beams. In addition, the predicted structural behaviors from the analyses also showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. From this study, both the experimental and analytical results demonstrated a good rehabilitation effect on the fire damaged normal strength concrete beams. Furthermore, the proposed analytical method can be used to predict the structural behaviors of rehabilitated and fire damaged concrete beams accurately without the time- and cost-consuming experimental process.

Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam

Procedia PDF Downloads 508
469 Cr (VI) Adsorption on Ce0.25Zr0.75O2.nH2O-Kinetics and Thermodynamics

Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García

Abstract:

Hexavalent chromium, Cr (VI), is present in the effluents from different industries such as electroplating, mining, and leather tanning. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior. Its dumping into water sources causes serious problems for both the environment and the public health of animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluent to reduce the Cr (VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, and electrodialysis. Adsorption has become a promising method for the purification of metal ions in water, since it is an economical and efficient technology. The adsorbent selection and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2.nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2.nH2O. Experiments were performed under the following conditions: initial Cr (VI) concentration = 25, 50 and 100 mg L-1, pH = 2, adsorbent dose = 4 g L-1, stirring time = 60 min, temperature = 20, 28 and 40 °C. The Cr (VI) concentration was estimated spectrophotometrically by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr (VI) adsorption on hydrated Ce0.25Zr0.75O2.nH2O was analyzed using pseudo-first- and pseudo-second-order kinetic models. The Langmuir and Freundlich models were used to model the experimental data. The convergence between the experimental values and those predicted by the model is expressed as a linear regression correlation coefficient (R2) and was employed as the model selection criterion. The adsorption process followed the pseudo-second-order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1, and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
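
For illustration, the linearized fits commonly used for this kind of kinetic and isotherm analysis can be sketched as below; the data arrays are synthetic placeholders, not the measured Cr (VI) values reported here.

    import numpy as np

    def fit_pseudo_second_order(t, qt):
        """Linearized pseudo-second-order kinetics: t/qt = 1/(k2*qe^2) + t/qe.
        Returns (qe, k2, R^2) from a straight-line fit of t/qt against t."""
        y = t / qt
        slope, intercept = np.polyfit(t, y, 1)
        qe = 1.0 / slope
        k2 = slope ** 2 / intercept          # since intercept = 1/(k2*qe^2)
        r2 = np.corrcoef(t, y)[0, 1] ** 2
        return qe, k2, r2

    def fit_langmuir(ce, qe):
        """Linearized Langmuir isotherm: Ce/qe = 1/(KL*qmax) + Ce/qmax.
        Returns (qmax, KL, R^2)."""
        y = ce / qe
        slope, intercept = np.polyfit(ce, y, 1)
        qmax = 1.0 / slope
        KL = slope / intercept
        r2 = np.corrcoef(ce, y)[0, 1] ** 2
        return qmax, KL, r2

    # Synthetic example data (placeholders, not the study's measurements)
    t = np.array([5.0, 10, 20, 30, 45, 60])           # min
    qt = np.array([4.0, 6.5, 8.9, 10.1, 11.0, 11.4])  # mg g-1
    ce = np.array([2.0, 5, 12, 25, 50])               # mg L-1
    qe_iso = np.array([6.0, 10.5, 15.8, 19.0, 21.5])  # mg g-1

    print("pseudo-second order (qe, k2, R2):", fit_pseudo_second_order(t, qt))
    print("Langmuir (qmax, KL, R2):", fit_langmuir(ce, qe_iso))

The thermodynamic parameters then follow from the standard relations ln K = ΔS°/R - ΔH°/(RT) and ΔG° = ΔH° - TΔS°.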

Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics

Procedia PDF Downloads 299
468 Effects on Inflammatory Biomarkers and Respiratory Mechanics in Laparoscopic Bariatric Surgery: Desflurane vs. Total Intravenous Anaesthesia with Propofol

Authors: L. Kashyap, S. Jha, D. Shende, V. K. Mohan, P. Khanna, A. Aravindan, S. Kashyap, L. Singh, S. Aggarwal

Abstract:

Obesity is associated with a chronic inflammatory state. During surgery, there is an interplay between anaesthetic and surgical stress vis-à-vis the already present complex immune state. Moreover, the postoperative period is dictated by inflammation, which is crucial for wound healing and regeneration. An excess of inflammatory response might hamper recovery besides increasing the risk of infection and complications. There is definite evidence of the immunosuppressive role of inhaled anaesthetic agents. This immune modulation may be brought into effect directly by influencing the cells of innate and adaptive immunity. The effects of propofol on immune mechanisms have been widely elucidated because of its popularity. It reduces superoxide generation, elastase release, and chemotaxis. However, there is no unequivocal proof of one agent's superiority over the other. Hence, an anaesthetic regimen with lesser inflammatory potential and specific to the obese patient is needed. The OBESITA trial protocol (2019) by Sousa and co-workers, currently in progress, aims to test the hypothesis that anaesthesia with sevoflurane results in a weaker proinflammatory response compared to propofol, as evidenced by lower IL-6 and other biomarkers and an increased macrophage differentiation into the M2 phenotype in adipose tissue. IL-6 was used as the objective parameter to evaluate inflammation as it is regulated by both surgery and anaesthesia. It is the most sensitive marker of the inflammatory response to tissue damage since it is released within minutes by blood leukocytes. We hypothesized that maintenance of anaesthesia with propofol would lead to less inflammation than maintenance with desflurane. Aims: The effect of two anaesthetic techniques, total intravenous anaesthesia (TIVA) with propofol and desflurane, on the surgical stress response was evaluated. The primary objective was to compare serum interleukin-6 (IL-6) levels before and after surgery. Methods: In this prospective single-blinded randomized controlled trial, 30 obese patients (BMI > 30 kg/m2) undergoing laparoscopic bariatric surgery under general anaesthesia were recruited. Patients were randomized to receive desflurane or TIVA using a target-controlled infusion for maintenance of anaesthesia. As a marker of inflammation, pre- and post-surgery IL-6 levels were compared. Results: After surgery, IL-6 levels increased significantly in both groups. The rise in IL-6 was less with TIVA than with desflurane; however, the difference did not reach significance. The post-surgery rise in IL-6 correlated positively with the complexity of the procedure and the duration of surgery and anaesthesia, rather than with the anaesthetic technique. The two groups did not differ in terms of intra-operative hemodynamic and respiratory variables, time to awakening, postoperative pulmonary complications, or duration of hospital stay. The incidence of nausea was significantly higher with desflurane than with TIVA. Conclusion: The inflammatory response did not differ as a function of anaesthetic technique when propofol and desflurane were compared. Also, patient and surgical variables dictated post-operative inflammation more than the anaesthetic factors. A larger sample size is needed to confirm or refute these findings.

Keywords: bariatric, biomarkers, inflammation, laparoscopy

Procedia PDF Downloads 123
467 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour

Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira

Abstract:

Entropy can be described as a measure of the number of states of a system, and when used in the context of physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While brain fMRI resting state entropy has been associated with some pathological conditions like schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in child behavior in healthy children. We describe a novel exploratory approach to evaluating brain fMRI resting state data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing and Shannon entropy calculation across 32 network regions of interest to acquire 496 unique functional connections, partial correlation coefficient analysis adjusted for sex was performed to identify associations between the entropy data and Strengths and Difficulties Questionnaire scores in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between oppositional defiant problems and two frontoparietal-cerebellar posterior connections (r = -0.212, p = 0.006 and r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008) and language posterior superior temporal gyrus (r = 0.210, p = 0.008). Positive associations were also found between the insula-frontoparietal connection and attention deficit/hyperactivity problems (r = 0.200, p < 0.01), and between the insula-default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex and dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus and dorsal attention intraparietal sulcus connection (r = 0.447, p = 0.008). The insula-prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections: the left salience insula and right salience insula (r = -0.449, p = 0.008), the left salience insula and right salience supramarginal gyrus (r = -0.512, p = 0.002), the default mode and visual network (r = -0.444, p = 0.009), the dorsal attention and language network (r = -0.490, p = 0.003), and the default mode and posterior parietal cortex (r = -0.546, p = 0.001). Entropy measures of resting state functional connectivity can be used to identify individual differences in brain function that are correlated with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
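
One way to make the entropy step concrete is sketched below: with 32 regions, the unique region pairs number 32 choose 2 = 496, and a Shannon entropy value can be computed for each pair. The random signals and the co-fluctuation operationalization are illustrative assumptions, not necessarily the exact pipeline used in the study.

    import numpy as np
    from itertools import combinations

    def shannon_entropy(signal, bins=16):
        """Shannon entropy (in bits) of a 1-D signal estimated via histogram binning."""
        counts, _ = np.histogram(signal, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Placeholder data: 32 regions x 200 time points of random signals (not cohort data)
    rng = np.random.default_rng(0)
    ts = rng.standard_normal((32, 200))
    ts = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)

    # One possible operationalization: the entropy of the moment-to-moment co-fluctuation
    # (element-wise product of z-scored signals) for every unique region pair.
    edge_entropy = {(i, j): shannon_entropy(ts[i] * ts[j])
                    for i, j in combinations(range(32), 2)}
    print(len(edge_entropy), "unique connections")          # 32 choose 2 = 496
    print("entropy of connection (0, 1):", round(edge_entropy[(0, 1)], 3))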

Keywords: child behaviour, functional connectivity, imaging, Shannon entropy

Procedia PDF Downloads 202
466 Nature as a Human Health Asset: An Extensive Review

Authors: C. Sancho Salvatierra, J. M. Martinez Nieto, R. García Gonzalez-Gordon, M. I. Martinez Bellido

Abstract:

Introduction: Nature could act as an asset for human health, protecting against possible diseases and promoting both physical and mental health. Goals: This paper aims to determine which natural elements show evidence of a positive influence on human health, on which particular aspects, and how. It also aims to determine the best biomarkers to measure such influence. Method: A systematic literature review was carried out. First, a general free-text search was performed in databases such as Scopus, PubMed and PsychInfo. Secondly, a specific search was performed combining keywords in order of increasing complexity. The snowballing technique was also used, and the databases of the CSIC (the Spanish National Research Council) were consulted. Of the 130 articles obtained and reviewed, 80 referred to natural elements that influenced health. These 80 articles were classified and tabulated according to the nature elements found, the health aspects studied, the health measurement parameters used and the measurement techniques used. In this classification, the results of the studies were codified according to whether they were positive, negative or neutral, both for the elements of nature and for the aspects of health studied. Finally, the results of the 80 selected studies were summarized and categorized according to the elements of nature that showed the greatest positive influence on health and the biomarkers that had shown greater reliability to measure said influence. Results: Of the 80 articles studied, 24 (30.0%) were reviews and 56 (70.0%) were original research articles. Among the 24 reviews, 18 (75%) found positive effects of natural elements on health, and 6 (25%) found both positive and negative effects. Of the 56 original articles, 47 (83.9%) showed positive results, 3 (5.4%) both positive and negative, 4 (7.1%) negative effects, and 2 (3.6%) found no effects. The results reflect positive effects of different elements of nature on the following pathologies: diabetes, high blood pressure, stress, attention deficit hyperactivity disorder, and psychotic, anxiety and affective disorders. They also show positive effects on the following areas: immune system, social interaction, recovery after illness, mood, decreased aggressiveness, concentrated attention, cognitive performance, restful sleep, vitality and sense of well-being. Among the elements of nature studied, those that show the greatest positive influence on health are forest immersion, natural views, daylight, outdoor physical activity, active transport, vegetation biodiversity, natural sounds and green residences. The biomarkers that show greater reliability for measuring the effects of natural elements are the levels of cortisol (both in blood and saliva), vitamin D, serotonin and melatonin, as well as blood pressure, heart rate, muscle tension and skin conductance. Conclusions: Nature is an asset for health, well-being and quality of life. Awareness programs, education and health promotion are needed based on the elements that nature brings us, which in turn generate proactive attitudes in the population towards the protection and conservation of nature. The studies related to this subject in Spain are very scarce. Acknowledgements: This study has been promoted and partially financed by the Environmental Foundation Jaime González-Gordon.

Keywords: health, green areas, nature, well-being

Procedia PDF Downloads 277
465 Stakeholder Mapping and Requirements Identification for Improving Traceability in the Halal Food Supply Chain

Authors: Laila A. H. F. Dashti, Tom Jackson, Andrew West, Lisa Jackson

Abstract:

Traceability systems are important in the agri-food and halal food sectors for monitoring ingredient movements, tracking sources, and ensuring food integrity. However, designing a traceability system for the halal food supply chain is challenging due to diverse stakeholder requirements and complex needs. Existing literature on stakeholder mapping and identifying requirements for halal food supply chains is limited. To address this gap, a pilot study was conducted to identify the objectives, requirements, and recommendations of stakeholders in the Kuwaiti halal food industry. The study collected data through semi-structured interviews with an international halal food manufacturer based in Kuwait. The aim was to gain a deep understanding of stakeholders' objectives, requirements, processes, and concerns related to the design of a traceability system in the country's halal food sector. Traceability systems are being developed and tested in the agri-food and halal food sectors due to their ability to monitor ingredient movements, track sources, and detect potential issues related to food integrity. Designing a traceability system for the halal food supply chain poses significant challenges due to diverse stakeholder requirements and the complexity of their needs (including varying food ingredients, different sources, destinations, supplier processes, certifications, etc.). Achieving a halal food traceability solution tailored to stakeholders' requirements within the supply chain necessitates prior knowledge of these needs. Although attempts have been made to address design-related issues in traceability systems, literature on stakeholder mapping and identification of requirements specific to halal food supply chains is scarce. Thus, this pilot study aims to identify the objectives, requirements, and recommendations of stakeholders in the halal food industry. The paper presents insights gained from the pilot study, which utilized semi-structured interviews to collect data from a Kuwait-based international halal food manufacturer. The objective was to gain an in-depth understanding of stakeholders' objectives, requirements, processes, and concerns pertaining to the design of a traceability system in Kuwait's halal food sector. The stakeholder mapping results revealed that government entities, food manufacturers, retailers, and suppliers are key stakeholders in Kuwait's halal food supply chain. Lessons learned from this pilot study regarding requirement capture for traceability systems include the need to streamline communication, focus on communication at each level of the supply chain, leverage innovative technologies to enhance process structuring and operations and reduce halal certification costs. The findings also emphasized the limitations of existing traceability solutions, such as limited cooperation and collaboration among stakeholders, high costs of implementing traceability systems without government support, lack of clarity regarding product routes, and disrupted communication channels between stakeholders. These findings contribute to a broader research program aimed at developing a stakeholder requirements framework that utilizes "business process modelling" to establish a unified model for traceable stakeholder requirements.

Keywords: supply chain, traceability system, halal food, stakeholders’ requirements

Procedia PDF Downloads 113
464 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. In addition, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: In this paper, we study the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime in the visible wavelength, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e. the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows some promising results (about 2 nm RMS of residual wavefront error, i.e. below lambda/50, for 40-100 nm RMS of input wavefront error) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results allow further study for higher aberrations and noise.
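
A minimal sketch of such a VGG-style regressor, written here in PyTorch, is given below; the layer sizes, number of segments, image size and random inputs are placeholders, not the architecture or data actually trained in this work.

    import torch
    import torch.nn as nn

    def vgg_block(c_in, c_out):
        """Two 3x3 convolutions followed by 2x2 max pooling (VGG-style)."""
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class PhasingRegressor(nn.Module):
        """Maps a focal-plane PSF image to per-segment phasing coefficients."""
        def __init__(self, n_segments=6, img_size=64):
            super().__init__()
            self.features = nn.Sequential(
                vgg_block(1, 16), vgg_block(16, 32), vgg_block(32, 64),
            )
            feat = 64 * (img_size // 8) ** 2   # three poolings halve the size three times
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(feat, 128), nn.ReLU(inplace=True),
                nn.Linear(128, n_segments),    # regression output, e.g. piston per segment
            )

        def forward(self, psf):
            return self.head(self.features(psf))

    model = PhasingRegressor()
    psf_batch = torch.randn(8, 1, 64, 64)    # placeholder PSFs, not simulated optics
    pred = model(psf_batch)                  # shape: (8, n_segments)
    loss = nn.functional.mse_loss(pred, torch.zeros_like(pred))
    print(pred.shape, float(loss))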

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 104
463 A Critical Analysis of How the Role of the Imam Can Best Meet the Changing Social, Cultural, and Faith-Based Needs of Muslim Families in 21st Century Britain

Authors: Christine Hough, Eddie Abbott-Halpin, Tariq Mahmood, Jessica Giles

Abstract:

This paper draws together the findings from two research studies, each undertaken with cohorts of South Asian Muslim respondents located in the North of England between 2017 and 2019. The first study, entitled Faith Family and Crime (FFC), investigated the extent to which a Muslim family’s social and health well-being is affected by a family member’s involvement in the Criminal Justice System (CJS). This study captured a range of data through a detailed questionnaire and structured interviews. The data from the interview transcripts were analysed using open coding and an application of aspects of the grounded theory approach. The findings provide clear evidence that the respondents were neither well-informed nor supported throughout the processes of the CJS, from arrest to post-sentencing. These experiences gave rise to mental and physical stress, potentially unfair sentencing, and a significant breakdown in communication within the respondents’ families. They serve to highlight a particular aspect of complexity in the current needs of those South Asian Muslim families who find themselves involved in the CJS, one which is closely connected to family structure, culture, and faith. The second study, referred to throughout this paper as #ImamsBritain (and providing the majority of content for this paper), explores how Imams, in their role as community faith leaders, can best address the complex and changing needs of South Asian Muslim families, such as those that emerged in the findings from FFC. The changing socio-economic and political climates of the last thirty or so years have brought about significant changes to the lives of Muslim families, and these have created more complex levels of social, cultural, and faith-based needs for families and individuals. As a consequence, Imams now have much greater demands made of them, and so their role has undergone far-reaching changes in response. The #ImamsBritain respondents identified a pressing need to develop a wider range of pastoral and counseling skills, which they saw as extending far beyond the traditional role of the Imam as a religious teacher and spiritual guide. The #ImamsBritain project was conducted with a cohort of British Imams in the North of England. Data were collected first through a questionnaire relating to the respondents’ training and development needs and then analysed using the Delphi approach. Through Delphi, the data were scrutinized in depth using interpretative content analysis. The findings from this project reflect the respondents’ individual perceptions of the kind of training and development they need to fulfill their role in 21st Century Britain. They also provide a unique framework for constructing a professional guide for Imams in Great Britain. The discussions and critical analyses in this paper draw on the discourses of professionalization and pastoral care and on relevant reports and reviews of Imam training in Europe and Canada.

Keywords: criminal justice system, faith and culture, Imams, Muslim community leadership, professionalization, South Asian family structure

Procedia PDF Downloads 138
462 Life Satisfaction of Non-Luxembourgish and Native Luxembourgish Postgraduate Students

Authors: Chrysoula Karathanasi, Senad Karavdic, Angela Odero, Michèle Baumann

Abstract:

It is not only economic determinants that impact life conditions; maintaining a good level of life satisfaction (LS) may also be an important current challenge. In Luxembourg, university students receive financial aid from the government. They are then registered at the Centre for Documentation and Information on Higher Education (CEDIES). Luxembourg is built on migration, with almost half its population consisting of foreigners. It is upon this basis that our research aims to analyze the associations between mental health factors (health satisfaction, psychological quality of life, worry), perceived financial situation, career attitudes (adaptability, optimism, knowledge, planning) and LS for non-Luxembourgish and native Luxembourgish postgraduate students. Between 2012 and 2013, postgraduates registered at CEDIES were contacted by post and asked to participate in an online survey offered in either English or French. The study population comprised 644 respondents. Our statistical analysis excluded those born abroad who had Luxembourgish citizenship and those born in Luxembourg who did not have citizenship. Two groups were formed, one consisting of 147 non-Luxembourgish students and the other of 284 natives. A single item measured LS (1 = not at all satisfied to 10 = very satisfied). Bivariate tests, correlations and multiple linear regression models were used, in which only significant relationships (p < 0.05) were integrated. No differences in the LS indicator were found between the two groups (7.8/10 non-Luxembourgish; 8.0/10 natives), and both were higher than the European indicator of 7.2/10 (for 25-34 year-olds). The non-Luxembourgish students were older than the natives (29.3 years vs. 26.3 years), perceived their financial situation as more difficult, and a higher percentage of their parents had an education level higher than a Bachelor's degree (father 59.2% vs 44.6% for natives; mother 51.4% vs 33.7% for natives). In addition, the father’s education was related to the LS of these postgraduates: the higher the education level, the greater the contribution to LS. For native students, the higher their scores for health satisfaction and career optimism, the higher their LS score. For both groups, LS was linked to mental health-related factors, perception of their financial situation, career optimism, adaptability and planning. The higher the psychological quality of life score, the greater the postgraduates’ LS. Good health and positive attitudes related to the job market enhanced their LS indicator.
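
The modelling step described above can be sketched as an ordinary least squares regression of LS on candidate predictors, keeping only those significant at p < 0.05; the data frame below is randomly generated placeholder data, not the CEDIES survey.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Randomly generated placeholder data, not the CEDIES survey responses
    rng = np.random.default_rng(1)
    n = 431  # 147 + 284, matching the two group sizes reported above
    df = pd.DataFrame({
        "health_satisfaction": rng.normal(7.0, 1.5, n),
        "psych_quality_of_life": rng.normal(60.0, 10.0, n),
        "perceived_financial_difficulty": rng.normal(2.5, 0.8, n),
        "career_optimism": rng.normal(3.5, 0.6, n),
    })
    df["life_satisfaction"] = (5.0 + 0.04 * df["psych_quality_of_life"]
                               + 0.2 * df["career_optimism"]
                               - 0.3 * df["perceived_financial_difficulty"]
                               + rng.normal(0.0, 0.8, n))

    X = sm.add_constant(df.drop(columns="life_satisfaction"))
    model = sm.OLS(df["life_satisfaction"], X).fit()
    pvals = model.pvalues.drop("const")
    print(model.summary())
    print("predictors retained at p < 0.05:", list(pvals[pvals < 0.05].index))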

Keywords: career attributes, father's education level, life satisfaction, mental health

Procedia PDF Downloads 371
461 Population Growth as the Elephant in the Room: Teachers' Perspectives and Willingness to Incorporate a Controversial Environmental Sustainability Issue in their Teaching

Authors: Iris Alkaher, Nurit Carmi

Abstract:

It is widely agreed among scientists that population growth (PG) is a major factor driving the global environmental crisis. Many researchers recognize that explicitly addressing the impact of PG on the environment and human quality of life through education systems worldwide could play a significant role in improving understanding of the links between rapid PG and environmental degradation and in changing perceptions, attitudes, and behaviors concerning the necessity to reduce the fertility rate. However, the issue of PG is still rarely included in schools' curricula, mainly because of its complexity and controversiality. This study aims to explore the perspectives of teachers with an academic background in environmental and sustainability education (ESE teachers) and teachers with no such background (non-ESE teachers) regarding PG as an environmental risk. The study also explores the teachers' willingness to include PG in their teaching and identifies what predicts their inclusion of it. In this mixed-methods research study, data were collected using questionnaires and interviews. The findings portray a complex picture concerning the debate about PG as a major factor driving the global environmental crisis in the Israeli context. Consistent with other countries, we found that the deep-rooted pronatalist culture in Israeli society, as well as a robust national pronatalist agenda and policies, have a tremendous impact on the education system. Therefore, we found that an academic background in ESE had a limited impact on teachers' perceptions concerning PG as a problem and on their willingness to include it in their teaching and discuss its controversiality. Teachers' attitudes related to PG demonstrated socially, culturally, and politically oriented disavowal justifications regarding the negative impacts of rapid PG, identified in the literature as population-skepticism and population-fatalism. Specifically, factors such as the ongoing Israeli-Palestinian conflict, the Jewish anxiety of destruction, and the religious command to “be fruitful and multiply” influenced the perceptions of both ESE and non-ESE teachers. While these arguments are unique to the Israeli context, pronatalist policies are international. In accordance with the pronatalist policy, we also found that the absence of PG from both school curricula and the Israeli public discourse was reported by ESE and non-ESE teachers as a major reason for their disregarding PG in their teaching. Under these circumstances, the role of the education system in bringing the population question to the front stage, in Israel and elsewhere, is more challenging. To encourage science and social studies teachers to incorporate the controversial issue of PG in their teaching and successfully confront dominant pronatalist cultures, they need strong and ongoing scaffolding and support. In accordance with scientists' agreement regarding the role of PG as a major factor driving the global environmental crisis, we call on stakeholders and policymakers in the education system to bring the population debate into schools' curricula, the sooner the better, not only as part of human efforts to mitigate environmental degradation but also to use this controversial topic as a platform for shaping critical learners and responsible, active citizens who are tolerant of different people's opinions.

Keywords: population growth, environmental and sustainability education, controversial environmental sustainability issues, pronatalism

Procedia PDF Downloads 102
460 The Meaning System of Tense: A Systemic Functional Approach

Authors: Cunyu Zhang

Abstract:

A literature review of studies related to tense shows that there are disagreements on the definition and existence of Chinese tense. Influenced by research on the English language, which regards tense as a grammatical category based on the verbal inflections of English, some Chinese researchers claim that there is no tense in the Chinese language because no verbal inflections are involved. Meanwhile, other Chinese researchers hold that Chinese still has tense, although its verbs are non-inflectional, based on the fact that Chinese lexical expressions can imply temporal meaning. We assume that the reasons for the above disagreements about Chinese tense lie in the fact that the previous studies prefer to view language “from below”, meaning that expressions of tense are the core part of these studies. However, there are about 6,000 languages with distinct expressions all over the world. Hence, if language studies concentrate only on expressions, it becomes more difficult to understand the nature of language. By contrast, the functions of languages are similar; otherwise, human beings could not communicate with each other. Therefore, we believe it is necessary to carry out a theoretical study of Chinese tense within the framework of Systemic Functional Linguistics (SFL), which holds that language is a system in which meaning is the core part while form is just the realization of meaning. In addition, SFL is a general linguistics providing a universal framework for languages all over the world. Therefore, based on SFL, the paper first redefines tense as a deictic semantic category for describing the speaker’s temporal location of processes and the relevant temporal relations. With reference to this definition, this study explores the meaning system of tense. It is proposed that tense expresses four kinds of meaning, namely interpersonal, experiential, logical and textual meanings. From the interpersonal angle, tense helps to exchange temporal information between the speaker and the listener, and the temporal information refers to the anchoring of a concerned process in the past, present or future by the speaker. From the experiential angle, tense plays a role in the temporal locating of material, mental, relational, existential, behavioral and verbal processes by the speaker. From the logical angle, tense denotes the temporal relations at the two levels of clause and clause complex, and such relations fall into simultaneity, anteriority and posteriority. From the textual angle, tense refers to the temporal relations at the level of text, and the temporal relations in question concern linear serial relations and synchronous serial relations.

Keywords: Chinese, meaning system, Systemic Functional Linguistics, tense

Procedia PDF Downloads 420
459 Learning Resources as Determinants for Improving Teaching and Learning Process in Nigerian Universities

Authors: Abdulmutallib U. Baraya, Aishatu M. Chadi, Zainab A. Aliyu, Agatha Samson

Abstract:

Learning resources is the field of study that investigates the process of analyzing, designing, developing, implementing, and evaluating learning materials, learners, and the learning process in order to improve teaching and learning in university-level education, which is essential for empowering students and the various sectors of Nigeria’s economy to succeed in a fast-changing global economy. A key innovation of the information age of the 21st century is the use of educational technologies in the classroom for instructional delivery; it involves the use of appropriate educational technologies such as smart boards, computers, projectors and other projected materials to facilitate learning and improve performance. The study examined learning resources as determinants for improving the teaching and learning process at Abubakar Tafawa Balewa University (ATBU), Bauchi, Bauchi State of Nigeria. Three objectives, three research questions and three null hypotheses guided the study. The study adopted a survey research design. The population of the study was 880 lecturers. A sample of 260 was obtained using the research advisor's table for determining sample size, and 250 respondents were proportionately selected from the seven faculties. The instrument used for data collection was a structured questionnaire, which was validated by two experts. The reliability of the instrument stood at 0.81, which is acceptable. The researchers, assisted by six research assistants, distributed and collected the questionnaire with a 75% return rate. Data were analyzed using means and standard deviations to answer the research questions, whereas simple linear regression was used to test the null hypotheses at a 0.05 level of significance. The findings revealed that physical facilities and digital technology tools significantly improved the teaching and learning process, while consumables, supplies and equipment did not significantly improve the teaching and learning process in the faculties. It was recommended that lecturers in the various faculties strengthen and sustain the use of digital technology tools and continue to properly maintain the available physical facilities. Also, the university management should, as a matter of priority, continue to adequately fund and frequently upgrade equipment, consumables and supplies to enhance the effectiveness of the teaching and learning process.
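
As the abstract notes, the analysis pairs descriptive statistics with simple linear regression tested at the 0.05 level. The snippet below is a minimal sketch of that general workflow in Python; the variable names and simulated Likert-style responses are illustrative placeholders, not the study's dataset.

```python
# Illustrative only: hypothetical item scores, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point Likert responses from n lecturers:
# x = rated availability of digital technology tools,
# y = rated improvement of the teaching-learning process.
n = 250
x = rng.integers(1, 6, size=n).astype(float)
y = 0.6 * x + rng.normal(0.0, 1.0, size=n)

# Descriptive statistics used to answer the research questions.
print("mean(x) =", x.mean(), "SD(x) =", x.std(ddof=1))
print("mean(y) =", y.mean(), "SD(y) =", y.std(ddof=1))

# Simple linear regression to test the null hypothesis of no effect
# at the 0.05 level of significance.
result = stats.linregress(x, y)
print("slope =", result.slope, "p-value =", result.pvalue)
print("reject H0 at alpha = 0.05:", result.pvalue < 0.05)
```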

Keywords: education, facilities, learning-resources, technology-tools

Procedia PDF Downloads 23
458 Prioritizing Ecosystem Services for South-Central Regions of Chile: An Expert-Based Spatial Multi-Criteria Approach

Authors: Yenisleidy Martinez Martinez, Yannay Casas-Ledon, Jo Dewulf

Abstract:

The ecosystem services (ES) concept has contributed to drawing attention to the benefits ecosystems generate for people and to how necessary natural resources are for human well-being. The identification and prioritization of ES constitute the first steps in undertaking conservation and valuation initiatives on behalf of people. Additionally, mapping the supply of ES is a powerful tool to support decision making regarding the sustainable management of landscapes and natural resources. In this context, the present study aimed to identify, prioritize and map the primary ES in the Biobio and Nuble regions using a methodology that combines expert judgment, multi-attribute evaluation methods, and Geographic Information Systems (GIS). Firstly, scores for the capacity of different land use/cover types to supply ES and for the importance attributed to each service were obtained from experts and stakeholders via an online survey. Afterward, the ES assessment matrix was constructed, and the weighted linear combination (WLC) method was applied to map the overall supply capacity of provisioning, regulating and maintenance, and cultural services. Finally, prioritized ES for the study area were selected and mapped. The results suggest that native forests, wetlands, and water bodies have the highest ES supply capacities, while urban and industrial areas and bare areas have a very low supply of services. On the other hand, fourteen out of twenty-nine services were selected by experts and stakeholders as the most relevant for the regions. The spatial distribution of ES shows that the Andean Range and part of the Coastal Range have the highest ES supply capacity, mostly for regulation and maintenance and cultural ES. This performance is related to the presence of native forests, water bodies, and wetlands in those zones. This study provides specific information about the most relevant ES in Biobio and Nuble according to the opinion of local stakeholders, together with the spatial identification of areas with a high capacity to provide services. These findings could be helpful as a reference for planners and policymakers in developing landscape management strategies oriented to preserving the supply of services in both regions.
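
The weighted linear combination step aggregates the expert-scored capacity matrix with the service importance weights into one overall supply score per land use/cover class. The snippet below is a minimal sketch of that aggregation; the class names, services, scores and weights are hypothetical placeholders, not the survey results reported above.

```python
# Illustrative only: hypothetical expert scores and weights, not the survey data.
import numpy as np

land_cover = ["native_forest", "wetland", "urban_industrial", "bare_area"]
services = ["water_regulation", "recreation", "timber"]

# Rows: land use/cover classes; columns: ecosystem services.
# Scores express each class's capacity to supply each service (0-5 scale).
capacity = np.array([
    [5.0, 4.0, 3.0],   # native_forest
    [5.0, 3.0, 0.0],   # wetland
    [0.0, 1.0, 0.0],   # urban_industrial
    [0.0, 0.0, 0.0],   # bare_area
])

# Importance weights attributed to each service by experts (normalised to sum to 1).
weights = np.array([0.5, 0.3, 0.2])

# Weighted linear combination: overall supply score per land use/cover class.
overall_supply = capacity @ weights
for lc, score in zip(land_cover, overall_supply):
    print(f"{lc}: {score:.2f}")
```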

Keywords: ecosystem services, expert judgment, mapping, multi-criteria decision making, prioritization

Procedia PDF Downloads 126
457 Short Association Bundle Atlas for Lateralization Studies from dMRI Data

Authors: C. Román, M. Guevara, P. Salas, D. Duclap, J. Houenou, C. Poupon, J. F. Mangin, P. Guevara

Abstract:

Diffusion Magnetic Resonance Imaging (dMRI) allows the non-invasive study of human brain white matter. From diffusion data, it is possible to reconstruct fiber trajectories using tractography algorithms. Our previous work consists of an automatic method for the identification of short association bundles of the superficial white matter (SWM), based on a whole-brain inter-subject hierarchical clustering applied to a HARDI database. The method finds representative clusters of similar fibers, belonging to a group of subjects, according to a distance measure between fibers, using a non-linear registration (DTI-TK). The algorithm performs an automatic labeling based on the anatomy, defined by a cortex mesh parcellated with the FreeSurfer software. The clustering was applied to two independent groups of 37 subjects. The clusters resulting from both groups were compared using a restrictive threshold on the mean distance between each pair of bundles from different groups, in order to keep reproducible connections. In the left hemisphere, 48 reproducible bundles were found, while 43 bundles were found in the right hemisphere. An inter-hemispheric bundle correspondence was then applied. The symmetric horizontal reflection of the right bundles was calculated in order to obtain their position in the left hemisphere. Next, the intersection between similar bundles was calculated. The pairs of bundles with a fiber intersection percentage higher than 50% were considered similar. The similar bundles between both hemispheres were fused and symmetrized. We obtained 30 common bundles between hemispheres. An atlas was created with the resulting bundles and used to segment 78 new subjects from another HARDI database, using a distance threshold between 6 and 8 mm according to the bundle length. Finally, a laterality index was calculated based on the bundle volume. Seven bundles of the atlas presented right laterality (IP_SP_1i, LO_LO_1i, Op_Tr_0i, PoC_PoC_0i, PoC_PreC_2i, PreC_SM_0i, and RoMF_RoMF_0i) and one presented left laterality (IP_SP_2i); there is no tendency of lateralization according to brain region. Many factors can affect the results, such as tractography artifacts, subject registration, and bundle segmentation. Further studies are necessary in order to establish the influence of these factors and evaluate SWM laterality.
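
The abstract does not spell out the exact laterality formula, but a common volume-based convention is LI = (V_left - V_right) / (V_left + V_right), where positive values indicate left laterality and negative values right laterality. The sketch below assumes that convention for illustration only.

```python
# Illustrative only: a commonly used volume-based laterality index;
# the abstract does not specify the authors' exact formula, so this is an assumption.
def laterality_index(volume_left_mm3: float, volume_right_mm3: float) -> float:
    """Return (L - R) / (L + R): > 0 suggests left laterality, < 0 right laterality."""
    total = volume_left_mm3 + volume_right_mm3
    if total == 0:
        raise ValueError("Both bundle volumes are zero; the index is undefined.")
    return (volume_left_mm3 - volume_right_mm3) / total

# Hypothetical volumes (mm^3) for one inter-hemispheric bundle pair.
print(laterality_index(volume_left_mm3=1800.0, volume_right_mm3=2300.0))  # negative -> right laterality
```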

Keywords: dMRI, hierarchical clustering, lateralization index, tractography

Procedia PDF Downloads 331
456 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. By adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, makes it possible to avoid the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one needed to define the earthquake excitation accurately, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed by using the proposed MEIM, are compared to those obtained adopting a conventional monolithic solution approach, i.e. the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis using a smaller time step.
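
For context, the conditional stability mentioned above comes from the central difference method, whose critical time step for an undamped system is Δt_cr = T_n/π = 2/ω_n. The sketch below illustrates the explicit central difference update and the stability check on a linear single-degree-of-freedom oscillator with hypothetical parameters; it is not the paper's friction pendulum bearing model or the MEIM partitioning itself.

```python
# Illustrative only: central difference integration of a linear SDOF oscillator,
# with hypothetical mass/stiffness values, showing the critical time step check
# (dt_cr = T_n / pi for an undamped system). This is not the paper's
# friction pendulum bearing model.
import numpy as np

m, k, c = 1.0e5, 4.0e7, 0.0         # mass [kg], stiffness [N/m], damping [N s/m]
omega_n = np.sqrt(k / m)
dt_cr = 2.0 / omega_n               # = T_n / pi, stability limit of central difference
dt = 0.5 * dt_cr                    # chosen time step (must stay below dt_cr)

t_end = 2.0
n_steps = int(t_end / dt)
p = lambda t: 1.0e4 * np.sin(5.0 * t)   # hypothetical external force history

u = np.zeros(n_steps + 1)
u_prev = 0.0                         # u at step -1 (zero initial displacement, velocity, force)
for i in range(n_steps):
    t = i * dt
    # Central difference: m*(u_next - 2u + u_prev)/dt^2 + c*(u_next - u_prev)/(2dt) + k*u = p(t)
    a_hat = m / dt**2 + c / (2.0 * dt)
    rhs = p(t) - (k - 2.0 * m / dt**2) * u[i] - (m / dt**2 - c / (2.0 * dt)) * u_prev
    u_next = rhs / a_hat
    u_prev, u[i + 1] = u[i], u_next

print("dt_cr =", dt_cr, "s; max displacement =", np.abs(u).max(), "m")
```

In the mixed scheme described in the abstract, an explicit update of this kind would be applied only to the nonlinear isolation degrees of freedom, while the superstructure is advanced with the unconditionally stable Newmark constant average acceleration method.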

Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability

Procedia PDF Downloads 278
455 N-Glycosylation in the Green Microalgae Chlamydomonas reinhardtii

Authors: Pierre-Louis Lucas, Corinne Loutelier-Bourhis, Narimane Mati-Baouche, Philippe Chan Tchi-Song, Patrice Lerouge, Elodie Mathieu-Rivet, Muriel Bardor

Abstract:

N-glycosylation is a post-translational modification taking place in the Endoplasmic Reticulum and the Golgi apparatus, where defined glycan features are added to proteins within a very specific sequence, Asn-X-Thr/Ser/Cys, where X can be any amino acid except proline. Because it is well established that these N-glycans play a critical role in protein biological activity and protein half-life, and that a different N-glycan structure may induce an immune response, they are very important for biopharmaceuticals, which are mainly glycoproteins bearing N-glycans. At present, most biopharmaceuticals are produced in mammalian cells such as Chinese Hamster Ovary (CHO) cells because their N-glycosylation is similar to that of humans, but due to the high production costs, several other species are being investigated as possible alternative systems. To this end, the green microalga Chlamydomonas reinhardtii was investigated as a potential production system for biopharmaceuticals. This choice was influenced by the facts that C. reinhardtii is a well-studied, fast-growing microalga with many molecular biology tools available. This organism also produces N-glycans on its endogenous proteins. However, the analysis of the N-glycan structure of this microalga has revealed some differences compared to humans. In contrast to humans, where the glycans are processed by the key enzymes N-acetylglucosaminyltransferase I and II (GnTI and GnTII), which add GlcNAc residues to form a GlcNAc₂Man₃GlcNAc₂ core N-glycan, C. reinhardtii lacks those two enzymes and possesses a GnTI-independent glycosylation pathway. Moreover, some enzymes not present in humans, such as xylosyltransferases and methyltransferases, are thought to act on the glycans of C. reinhardtii. Furthermore, a recent structural study by mass spectrometry shows that the N-glycosylation precursor, supposed to be conserved in almost all eukaryotic cells, results in a linear Man₅GlcNAc₂ rather than a branched one in C. reinhardtii. In this work, we will discuss the newly released MS information on the C. reinhardtii N-glycan structure and its impact on our attempt to modify the glycans in a human-like manner. Two strategies will be discussed. The first consists of the study of xylosyltransferase insertional mutants from the CLIP library in order to remove xyloses from the N-glycans. The second goes further in the humanization by transforming the microalga with an exogenous gene from Toxoplasma gondii having an activity similar to GnTI and GnTII, with the aim of synthesizing GlcNAc₂Man₃GlcNAc₂ in C. reinhardtii.

Keywords: Chlamydomonas reinhardtii, N-glycosylation, glycosyltransferase, mass spectrometry, humanization

Procedia PDF Downloads 177
454 Satellite Multispectral Remote Sensing of Ozone Pollution

Authors: Juan Cuesta

Abstract:

Satellite observation is a fundamental component of air pollution monitoring systems, such as the large-scale Copernicus Programme. Next-generation satellite sensors, in orbit or planned for the future, offer great potential to observe major air pollutants, such as tropospheric ozone, with unprecedented spatial and temporal coverage. However, the satellite approaches developed for remote sensing of tropospheric ozone are based solely on measurements from a single instrument in a specific spectral range, either thermal infrared or ultraviolet. These methods offer sensitivity to tropospheric ozone located no lower than 3 or 4 km above the surface, thus limiting their applications for ozone pollution analysis. Indeed, no current observation of a single spectral domain provides enough information to accurately measure ozone in the atmospheric boundary layer. To overcome this limitation, we have developed a multispectral synergism approach, called "IASI+GOME2", at the Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA). This method is based on the synergy of thermal infrared and ultraviolet observations from, respectively, the Infrared Atmospheric Sounding Interferometer (IASI) and the Global Ozone Monitoring Experiment-2 (GOME-2) sensors onboard the MetOp satellites, which have been in orbit since 2007. IASI+GOME2 allowed the first satellite observation of ozone plumes located between the surface and 3 km of altitude (what we call the lowermost troposphere), as it offers significant sensitivity in this layer. This represents a major advance for the observation of ozone in the lowermost troposphere and its application to air quality analysis. The ozone abundance derived by IASI+GOME2 shows good agreement with independent observations based on ozonesondes (a low mean bias, a linear correlation larger than 0.8 and a mean precision of about 16%) around the world during all seasons. Using IASI+GOME2, lowermost tropospheric ozone pollution plumes are quantified both in terms of concentrations and in terms of the amounts of ozone photochemically produced during transport, also enabling the characterization of ozone pollution events such as those that occurred during the lockdowns linked to the COVID-19 pandemic. The current paper will present the IASI+GOME2 multispectral approach to observing the lowermost tropospheric ozone from space and an overview of several applications on different continents and at a global scale.

Keywords: ozone pollution, multispectral synergism, satellite, air quality

Procedia PDF Downloads 81
453 Instructors' Willingness, Self-Efficacy Beliefs, Attitudes and Knowledge about Provisions of Instructional Accommodations for Students with Disabilities: The Case of Selected Universities in Ethiopia

Authors: Abdreheman Seid Abdella

Abstract:

This study examined instructors' willingness, self-efficacy beliefs, attitudes and knowledge about the provision of instructional accommodations for students with disabilities in universities. The major concepts used in this study were operationally defined, and some models of disability were reviewed. Questionnaires were distributed to a total of 181 instructors from four universities, and quantitative data were generated. Appropriate methods of data analysis were then employed. The results indicated that, on average, instructors had positive willingness, strong self-efficacy beliefs and positive attitudes towards providing instructional accommodations. In addition, the results showed that the majority of participants had a moderate level of knowledge about the provision of instructional accommodations. Concerning the relationship between instructors' background variables and the dependent variables, the results revealed that the location of the university and awareness-raising training about Inclusive Education showed statistically significant relationships with all dependent variables (willingness, self-efficacy beliefs, attitudes and knowledge). On the other hand, gender and college/faculty did not show a statistically significant relationship. In addition, it was found that, among the inter-correlations of the dependent variables, the correlation between attitudes and willingness to provide accommodations was the strongest. Furthermore, using multiple linear regression analysis, this study also indicated that predictor variables such as self-efficacy beliefs, attitudes, knowledge and teaching methodology training made statistically significant contributions to predicting the criterion willingness. Predictor variables such as willingness and attitudes made statistically significant contributions to predicting self-efficacy beliefs. Predictor variables such as willingness, the Special Needs Education course and self-efficacy beliefs made statistically significant contributions to predicting attitudes. Predictor variables such as the Special Needs Education course, the location of the university and willingness made statistically significant contributions to predicting knowledge. Finally, using exploratory factor analysis, this study showed that there were four components, or factors, representing the underlying constructs of the willingness items and four representing the self-efficacy beliefs items, five components for the attitudes items, and three components representing the underlying constructs of the knowledge items. Based on the findings, recommendations were made for improving the situation of instructional accommodations in Ethiopian universities.

Keywords: willingness, self-efficacy belief, attitude, knowledge

Procedia PDF Downloads 270
452 Artificial Intelligence: Obstacles Patterns and Implications

Authors: Placide Poba-Nzaou, Anicet Tchibozo, Malatsi Galani, Ali Etkkali, Erwin Halim

Abstract:

Artificial intelligence (AI) is a general-purpose technology that is transforming many industries, working life and society by stimulating economic growth and innovation. Despite the huge potential benefits to be generated, the adoption of AI varies from one organization to another, from one region to another, and from one industry to another, due in part to obstacles that can inhibit an organization, or organizations located in a specific geographic region or operating in a specific industry, from adopting AI technology. In this context, understanding these obstacles and their implications for AI adoption from the perspective of configurational theory is important for at least the following reasons: (1) understanding these obstacles is the first step in enabling policymakers and providers to make informed decisions in stimulating AI adoption; (2) most studies have investigated the obstacles or challenges of AI adoption in isolation, under linear assumptions, while configurational theory offers a holistic and multifaceted way of investigating the intricate interactions between perceived obstacles and barriers, helping to assess their synergistic combinations under assumptions of non-linearity and leading to insights that would otherwise be out of the scope of studies investigating these obstacles in isolation. This study pursues two objectives: (1) characterize organizations by uncovering the typical profiles of combinations of 15 internal and external obstacles that may prevent organizations from adopting AI technology; (2) assess the variation in intensity of AI adoption associated with each configuration. We used data from a survey of AI adoption by organizations conducted throughout the EU27, Norway, Iceland and the UK (N=7549). First, cluster analysis and discriminant analysis helped uncover configurations of organizations based on the 15 obstacles, including eight external and seven internal. Second, we compared the clusters according to AI adoption intensity using an analysis of variance (ANOVA) and a Tamhane T2 post hoc test. The study uncovers three strongly separated clusters of organizations based on perceived obstacles to AI adoption. The clusters are labeled according to their magnitude of perceived obstacles to AI adoption: (1) Cluster I – high level of perceived obstacles (N = 2449, 32.4%); (2) Cluster II – low level of perceived obstacles (N = 1879, 24.9%); (3) Cluster III – moderate level of perceived obstacles (N = 3221, 42.7%). The proposed taxonomy goes beyond the normative understanding of perceived obstacles to AI adoption and their implications: it provides a well-structured and parsimonious lens that is useful for policymakers, AI technology providers, and researchers. Surprisingly, the ANOVAs revealed that the “high level of perceived obstacles” cluster was associated with a significantly high intensity of AI adoption.
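
The clustering-then-comparison workflow described above can be outlined as follows. The sketch uses k-means as a stand-in for the cluster analysis and a one-way ANOVA on adoption intensity, with simulated obstacle ratings rather than the EU27/Norway/Iceland/UK survey data; the discriminant analysis and the Tamhane T2 post hoc test are omitted.

```python
# Illustrative only: simulated obstacle ratings and AI adoption intensity,
# not the survey data; k-means stands in for the cluster analysis and
# scipy's one-way ANOVA for the group comparison.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_orgs, n_obstacles = 900, 15

# Simulated ratings of 15 perceived obstacles (8 external + 7 internal), scale 1-5.
obstacles = rng.uniform(1, 5, size=(n_orgs, n_obstacles))

# Step 1: uncover configurations of organizations based on the 15 obstacles.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(obstacles)
labels = kmeans.labels_

# Simulated AI adoption intensity (e.g. number of AI technologies in use).
adoption = rng.poisson(lam=2.0, size=n_orgs).astype(float)

# Step 2: compare the clusters on adoption intensity with a one-way ANOVA.
groups = [adoption[labels == g] for g in range(3)]
f_stat, p_value = stats.f_oneway(*groups)
print("cluster sizes:", np.bincount(labels))
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```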

Keywords: artificial intelligence, obstacles, adoption, taxonomy

Procedia PDF Downloads 106
451 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Denis Gabor’s invention: holography. However, the major difficulty here is the lack of a suitable recording medium. Some enhancements were therefore essential, and the 2D version of bulk metamaterials has been introduced: the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve an intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by means of solving Maxwell’s equations. In this context, integral methods are emerging as an important tool to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuit method offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of the Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists of creating an electric image of the studied structure using the discontinuity plane paradigm while taking its environment into account. The electromagnetic state of the discontinuity plane is thus described by generalised test functions, which are modelled by virtual sources that do not store energy. The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit cell input impedance. It was found that the variation of the complex conductivity of graphene allows control of the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we were able to determine that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers a full 2π reflection phase control.
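
The tunability described above rests on the dependence of graphene's complex surface conductivity on its chemical potential. The sketch below evaluates the widely used intraband (Drude-like) term of the Kubo conductivity at a terahertz frequency for several chemical potentials; the relaxation time, temperature and frequency are illustrative assumptions, not the design values of the reflectarray element above, and the signs depend on the chosen time-harmonic convention.

```python
# Illustrative only: the intraband (Drude-like) term of graphene's Kubo surface
# conductivity, written for an exp(-i*omega*t) time convention, with a hypothetical
# relaxation time and room temperature; not the parameters of the design above.
import numpy as np

E_CHARGE = 1.602176634e-19     # C
K_B = 1.380649e-23             # J/K
HBAR = 1.054571817e-34         # J s

def sigma_intra(omega: float, mu_c_eV: float, tau: float = 1e-12, T: float = 300.0) -> complex:
    """Intraband Kubo surface conductivity of graphene [S] vs. chemical potential."""
    mu_c = mu_c_eV * E_CHARGE
    prefactor = 1j * E_CHARGE**2 * K_B * T / (np.pi * HBAR**2 * (omega + 1j / tau))
    return prefactor * (mu_c / (K_B * T) + 2.0 * np.log(np.exp(-mu_c / (K_B * T)) + 1.0))

f = 1.0e12                      # 1 THz operating frequency (illustrative)
omega = 2.0 * np.pi * f
for mu_c in (0.1, 0.3, 0.5, 0.7):   # chemical potential in eV
    s = sigma_intra(omega, mu_c)
    print(f"mu_c = {mu_c:.1f} eV -> sigma = {s.real:.3e} + {s.imag:.3e}j S")
```

Because the imaginary part of this conductivity changes markedly with chemical potential, the surface impedance seen by each patch, and hence the reflection phase of the element, can be tuned electrically, which is consistent with the mechanism described in the abstract.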

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 176
450 Pupils' and Teachers' Perceptions and Experiences of Welsh Language Instruction

Authors: Mirain Rhys, Kevin Smith

Abstract:

In 2017, the Welsh Government introduced an ambitious new strategy to increase the number of Welsh speakers in Wales to 1 million by 2050. The Welsh education system is a vitally important feature of this strategy. All children attending state schools in Wales learn Welsh as a second language until the age of 16 and are assessed at General Certificate of Secondary Education (GCSE) level. In 2013, a review of Welsh second language instruction in Key Stages 3 and 4 was completed. The report identified considerable gaps in teachers’ preparation and training for teaching Welsh, a poor Welsh language ethos at many schools, and a general lack of resources to support the instruction of Welsh. Recommendations were made across a number of dimensions, including curriculum content, pedagogical practice, and teacher assessment, training, and resources. With a new national curriculum currently in development, this study builds on that review and provides unprecedented detail on pupils’ and teachers’ perceptions of Welsh language instruction. The current research built on data taken from an existing capacity-building research project on Welsh education, the Wales multi-cohort study (WMS). Quantitative data taken from WMS surveys with over 1200 pupils in schools in Wales indicated that Welsh language lessons were the least enjoyable subject among pupils. The current research aimed to unpick pupil experiences in order to add to the policy development context. To achieve this, forty-four pupils and four teachers in three schools from the larger WMS sample participated in focus groups. Participants from years 9, 11 and 13 who had indicated positive, negative and neutral attitudes towards the Welsh language in a previous WMS survey were selected. Questions were based on previous research exploring issues including, but not limited to, pedagogy, policy, assessment, engagement and (teacher) training. A thematic analysis of the focus group recordings revealed that the majority of participants held positive views around keeping the language alive but did not want to take on responsibility for its maintenance. These views were almost entirely based on their experiences of learning Welsh at school, especially in relation to their perceived lack of choice and their opinions around particular lesson strategies and assessment. Analysis of the teacher interviews highlighted a distinct lack of resources (materials and staff alike) compared to modern foreign languages, which had a negative impact on student motivation and attitudes. Both staff and students indicated a need for more practical, oral language instruction which could lead to Welsh being used outside the classroom. The data corroborate many of the review’s previous findings, but what makes this research distinctive is the way in which pupils poignantly address generally misguided aims for Welsh language instruction, poor pedagogical practice and a general disconnect between Welsh instruction and its daily use in their lives. These findings emphasize the complexity of incorporating the educational sector in strategies for Welsh language maintenance, and the complications arising from pedagogical training, support, and resources, as well as teacher and pupil perceptions of, and attitudes towards, teaching and learning Welsh.

Keywords: bilingual education, language maintenance, language revitalisation, minority languages, Wales

Procedia PDF Downloads 112
449 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals

Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen

Abstract:

Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have also included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning teaching with a group of graduate nurses through official training within a Master's Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers the period from 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document in which the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one had a degree in Computer Engineering. Two students had a nursing specialty degree obtained through residency and two through exceptional equivalent recognition. Except for the engineer, no subject had previously received training in this regard. All the students enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area and attended at least one tutorial. The maximum was three tutorials, amounting to one hour in total. Among the material available for consultation was an example of a document drawn up based on the IEEE Standard on an issue not related to care. The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension shown in the elaboration of the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical practice in care. The results showed an average grade of 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among the students. Examples of care areas selected are care plans, family and community health, delivery care, administration and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of construction of technological tools that are going to impact the care of people, families and the community.

Keywords: care, learning, nursing, technology

Procedia PDF Downloads 136