Search results for: quality of higher education
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22987


67 Classical Improvisation Facilitating Enhanced Performer-Audience Engagement and a Mutually Developing Impulse Exchange with Concert Audiences

Authors: Pauliina Haustein

Abstract:

Improvisation was part of Western classical concert culture and performers’ skill sets until the early 20th century. Historical accounts, as well as recent studies, indicate that improvisatory elements in the programme may contribute specifically towards the audience’s experience of enhanced emotional engagement during the concert. This paper presents findings from the author’s artistic practice research, which explored re-introducing improvisation to Western classical performance practice as a musician (cellist and ensemble partner/leader). In an investigation of four concert cycles, the performer-researcher sought to acquire solo and chamber music improvisation techniques (both related to and independent of repertoire), conduct ensemble improvisation rehearsals, design concerts with an improvisatory approach, and reflect on interactions with audiences after each concert. Data were collected through a reflective diary, video recordings, measurement of sound parameters, questionnaires, a focus group, and interviews. The performer’s empirical experiences and findings from the audience research components were juxtaposed and interrogated to better understand (1) the rehearsal and planning processes that enable improvisatory elements to return to the Western classical concert experience and (2) the emotional experience and type of engagement that occur throughout the concert experience for both performer and audience members. This informed the development of a concert model, in which a programme of solo and chamber music repertoire and improvisations was combined according to historically evidenced performance practice (including free formal solo and ensemble improvisations based on audience suggestions). 
Inspired by historical concert culture, where elements of risk-taking, spontaneity, and audience involvement (such as proposing themes for fantasies) were customary, this concert model invited musicians to contribute to the process personally and creatively at all stages, from programme planning through the live concert itself. The type of democratic, personal, creative, and empathetic collaboration that emerged as a result appears unique in Western classical contexts, instead finding resonance in jazz ensemble, drama, or interdisciplinary settings. The research identified features of ensemble improvisation, such as empathy, emergence, mutual engagement, and collaborative creativity, that became mirrored in the audience’s responses, generating higher levels of emotional engagement, empathy, inclusivity, and a participatory, co-creative experience. It appears that during improvisatory moments in the concert programme, audience members started feeling more like active participants in a creative, collaborative exchange and became stakeholders in a deeper phenomenon of meaning-making and narrativization. Examining interactions between all involved during the concert revealed that performer-audience impulse exchange occurred on multiple levels of awareness and seemed to build upon itself, resulting in particularly strong experiences of engagement for both performer and audience. This impact appeared especially meaningful for audience members who were seldom concertgoers and reported little familiarity with classical music. The study found that re-introducing improvisatory elements to Western classical concert programmes has strong potential for increasing the audience’s emotional engagement with the musical performance, for enabling audience members to connect more personally with the individual performers, and for reaching new-to-classical-music audiences.

Keywords: artistic research, audience engagement, audience experience, classical improvisation, ensemble improvisation, emotional engagement, improvisation, improvisatory approach, musical performance, practice research

Procedia PDF Downloads 110
66 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study

Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre

Abstract:

Precise landslide monitoring with differential global navigation satellite system (GNSS) is well known, but technical or economic reasons limit its application by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers, plus one reference station for differential GNSS. A dedicated radio link connects all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLib software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and permits a real overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, deploying up to 7 rovers per site, for an 18-month-long survey. The aim was to assess the robustness and accuracy of the system in different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. Performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes. 
The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system’s robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and vertically oriented solar panels, leading to data interruptions; and strong wind damaged a reference station. The possibility of changing the session parameters remotely was very useful. In conclusion, the rover network tested provided the foreseen sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on surrounding conditions, but short pre-measurements should allow moving a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic modification of the duration and number of sessions based on battery level and rover displacement velocity.
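
The reported dependence of accuracy on session duration can be illustrated with a toy simulation. This is not RTKLib output, and the epoch noise is assumed white, whereas real GNSS errors are time-correlated; the sketch only shows the averaging effect that favours longer sessions:

```python
import numpy as np

# Toy illustration: the scatter of a session-averaged position shrinks
# roughly with the square root of the number of epochs averaged, one
# reason 60-minute sessions outperform sessions under 30 minutes.
rng = np.random.default_rng(0)

def session_scatter_mm(minutes, epoch_sd_mm=8.0, epochs_per_min=60, n_sessions=500):
    """Std (mm) of the session-mean position across many simulated sessions."""
    n = minutes * epochs_per_min
    # each session: mean of n independent epoch solutions with white noise
    means = rng.normal(0.0, epoch_sd_mm, size=(n_sessions, n)).mean(axis=1)
    return float(means.std())

short = session_scatter_mm(10)   # 10-minute sessions
long_ = session_scatter_mm(60)   # 60-minute sessions
assert long_ < short             # longer sessions give tighter repeatability
```

The assumed 8 mm per-epoch noise is arbitrary; in practice correlated errors (multipath, atmosphere) make the real improvement flatter than 1/√n.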

Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter

Procedia PDF Downloads 93
65 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App

Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira

Abstract:

The increasing complexity and demand of transport services strain transportation systems, especially in urban areas with limited possibilities for building new infrastructure. The solution to this challenge requires changes in travel behavior. One of the proposed means to induce such change is the multimodal travel app. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging, and gamification elements. The prospect of mobility management travel apps stimulating sustainable mobility rests not only on the original and proper employment of behavior change strategies, but also on explicitly anchoring them in established constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding the study of mobility management travel apps grounded in behavioral theories, which should be explored further. This study addresses this gap through a social cognitive theory-based examination. In contrast to conventional methods in technology adoption research, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm, a hybrid causal discovery method. A technology-use preference survey was designed to collect data. 
The survey elicited different groups of variables, including (1) three groups of users’ motives for using the app: gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment), and normative motives (e.g., lower travel-related CO2 production); (2) technology-related self-concept (i.e., technophile attitude); and (3) use intention for the travel app. The questionnaire items fed a causal discovery procedure to learn the causal structure of the data. Causal discovery from observational data is a critical challenge with applications in many research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app, as well as technology self-concept, as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of users’ motives. These can be explained by the “frustration-regression” principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret the established associations.
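
MMHC couples a constraint-based skeleton search with a score-based hill-climbing phase. As a rough illustration of the score-based half only, the sketch below greedily toggles edges of a linear-Gaussian network to maximize a BIC score. All function names and the synthetic data are hypothetical, not taken from the paper:

```python
import numpy as np

def bic(data, child, parents):
    """BIC of a linear-Gaussian regression of `child` on `parents` (higher is better)."""
    y = data[:, child]
    n = len(y)
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in sorted(parents)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return -n * np.log(rss / n + 1e-12) - X.shape[1] * np.log(n)

def has_path(parents, src, dst):
    """True if `dst` is reachable from `src` along parent -> child edges."""
    stack, seen = [src], set()
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        if v in seen:
            continue
        seen.add(v)
        stack.extend(c for c in parents if v in parents[c])  # children of v
    return False

def hill_climb(data):
    """Greedily toggle single edges while total BIC improves; returns a dict
    mapping each variable to its parent set (a DAG by construction)."""
    d = data.shape[1]
    parents = {v: set() for v in range(d)}
    while True:
        best_gain, best = 1e-6, None
        for child in range(d):
            base = bic(data, child, parents[child])
            for p in range(d):
                if p == child:
                    continue
                trial = set(parents[child]) ^ {p}  # toggle edge p -> child
                if p in trial and has_path(parents, child, p):
                    continue                        # adding it would create a cycle
                gain = bic(data, child, trial) - base
                if gain > best_gain:
                    best_gain, best = gain, (child, trial)
        if best is None:
            return parents
        parents[best[0]] = best[1]

# Synthetic demo: x0 drives x1 strongly; x2 is independent noise
rng = np.random.default_rng(1)
x0 = rng.normal(size=2000)
x2 = rng.normal(size=2000)
x1 = 2.0 * x0 + 0.5 * rng.normal(size=2000)
g = hill_climb(np.column_stack([x0, x1, x2]))
assert (0 in g[1]) or (1 in g[0])  # the strong x0-x1 dependence is recovered
```

The full MMHC algorithm restricts these toggles to a skeleton found by conditional-independence tests, which this sketch omits.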

Keywords: travel app, behavior change, persuasive technology, travel information, causality

Procedia PDF Downloads 116
64 Design, Control and Implementation of 3.5 kW Bi-Directional Energy Harvester for Intelligent Green Energy Management System

Authors: P. Ramesh, Aby Joseph, Arya G. Lal, U. S. Aji

Abstract:

Integration of distributed green renewable energy sources together with battery energy storage is an inevitable requirement in a smart grid environment. To achieve this, an Intelligent Green Energy Management System (i-GEMS) needs to be incorporated to ensure coordinated operation between supply and load demand based on the hierarchy of Renewable Energy Sources (RES), battery energy storage, and the distribution grid. A bi-directional energy harvester is an integral component of an i-GEMS and is required to meet the following technical challenges: (1) capability for bi-directional (buck/boost) operation; (2) reduction of circuit parasitics to suppress voltage spikes; (3) converter startup; (4) high-frequency magnetics; (5) higher power density; (6) mode transition issues during battery charging and discharging. This paper focuses on addressing the above issues through the design, development, and implementation of a bi-directional energy harvester with galvanic isolation. In this work, the hardware architecture for a bi-directional energy harvester rated at 3.5 kW is developed in both Isolated Full Bridge Boost Converter (IFBBC) and Dual Active Bridge (DAB) converter configurations, using modular power electronics hardware that is identical for both the solar PV array and the battery energy storage. In the IFBBC, the current-fed full bridge is enabled and the voltage-fed full bridge is disabled through Pulse Width Modulation (PWM) pulses for boost mode of operation, and vice versa for buck mode. In the DAB converter, all switches are active so as to adjust the phase shift angle between the primary and secondary full bridges, which in turn determines the power flow direction depending on the mode (boost/buck) of operation. 
Here, the control algorithm is developed to ensure regulation of the common DC link voltage and maximum power extraction from the renewable energy sources depending on the selected mode (buck/boost) of operation. The circuit analysis and simulation study are conducted using PSIM 9.0 in three scenarios: (1) IFBBC with passive clamp, (2) IFBBC with active clamp, and (3) DAB converter. In this work, a common hardware prototype for a bi-directional energy harvester rated at 3.5 kW is built for the IFBBC and DAB converter configurations. The power circuit is built with appropriately selected MOSFETs, gate drivers with galvanic isolation, a high-frequency transformer, filter capacitors, and a filter boost inductor. The experiment was conducted for the IFBBC with passive clamp under boost mode, and the prototype confirmed the simulation results with a measured efficiency of 88% at 2.5 kW output power. The digital controller hardware platform is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the bi-directional energy harvester is written in C and developed using Code Composer Studio. Comprehensive analyses of the power circuit design, the control strategy for battery charging/discharging under buck/boost modes, and a comparative performance evaluation using simulation and experimental results will be presented.
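
The phase-shift-to-power behaviour described for the DAB stage follows the textbook single-phase-shift relation P = n·V1·V2·φ(π − |φ|) / (2π²·fs·L). The abstract does not give this equation or any component values, so the numbers below are purely illustrative, not the 3.5 kW design:

```python
import math

def dab_power(v1, v2, n, fs, L, phi):
    """Textbook single-phase-shift DAB power flow (W); phi in radians,
    positive phi transfers power from primary to secondary."""
    return (n * v1 * v2 * phi * (math.pi - abs(phi))) / (2 * math.pi ** 2 * fs * L)

# Illustrative values (assumed, not the paper's): 400 V DC link, 48 V battery,
# 8:1 transformer, 100 kHz switching, 50 uH total leakage inductance.
p_fwd = dab_power(400, 48, 8, 100e3, 50e-6, math.pi / 3)   # e.g. PV delivery
p_rev = dab_power(400, 48, 8, 100e3, 50e-6, -math.pi / 3)  # e.g. battery charging
assert abs(p_rev + p_fwd) < 1e-9          # reversing phi reverses the power flow
assert dab_power(400, 48, 8, 100e3, 50e-6, math.pi / 2) > p_fwd  # peak at pi/2
```

Sign-reversal of the phase shift is exactly the mechanism the abstract describes for switching between charging and discharging modes.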

Keywords: bi-directional energy harvester, dual active bridge, isolated full bridge boost converter, intelligent green energy management system, maximum power point tracking, renewable energy sources

Procedia PDF Downloads 107
63 The Prospects of Optimized KOH/Cellulose 'Papers' as Hierarchically Porous Electrode Materials for Supercapacitor Devices

Authors: Dina Ibrahim Abouelamaiem, Ana Jorge Sobrido, Magdalena Titirici, Paul R. Shearing, Daniel J. L. Brett

Abstract:

Global warming and the scarcity of fossil fuels have had a radical impact on the world economy and ecosystem. The urgent need for alternative energy sources has prompted extensive research into efficient and sustainable means of energy conversion and storage. Among various electrochemical systems, supercapacitors have attracted significant attention in the last decade due to their high power delivery, long cycle life compared to batteries, and simple mechanism. Recently, the performance of these devices has drastically improved, as the tuning of nanomaterials has provided efficient charge and storage mechanisms. Carbon materials, in various forms, are believed to pioneer the next generation of supercapacitors due to their attractive properties, which include high electronic conductivity, high surface area, and easy processing and functionalization. Cellulose has eco-friendly attributes and is a feasible replacement for man-made fibers. The carbonization of cellulose yields carbons, including activated carbon and graphite fibers. Activated carbons are the most widely exploited candidates for supercapacitor electrode materials and can be complemented with pseudocapacitive materials to achieve high energy and power densities. In this work, the optimum functionalization conditions of cellulose have been investigated for supercapacitor electrode materials. The precursor was treated with potassium hydroxide (KOH) at different KOH/cellulose ratios prior to carbonization in an inert nitrogen atmosphere at 850 °C. The chalky products were washed, dried, and characterized with different techniques, including transmission electron microscopy (TEM), X-ray tomography, and nitrogen adsorption-desorption isotherms. The morphological characteristics and their effect on the electrochemical performance were investigated in two- and three-electrode systems. 
The KOH/cellulose ratios of 0.5:1 and 1:1 exhibited the highest performances, with their unique hierarchical porous network structure, high surface areas, and low cell resistances. Both samples achieved the best results in three-electrode systems and coin cells, with specific gravimetric capacitances as high as 187 F g-1 and 20 F g-1 at a current density of 1 A g-1, and retention rates of 72% and 70%, respectively. This is attributed to the morphology of the samples, which consisted of a well-balanced micro-, meso- and macro-porosity network structure. This study reveals that the electrochemical performance does not depend solely on high surface area but also on an optimum pore size distribution, specifically at low current densities. The micro- and meso-pore contribution to the final pore structure was found to dominate at low KOH loadings, reaching ‘equilibrium’ with macropores at the optimum KOH loading, after which macropores dictate the porous network. The wide range of pore sizes is a determining factor for the mobility and penetration of electrolyte ions in the porous structures. These findings highlight the influence of various morphological factors on double-layer capacitances and high rate performance. In addition, they open a platform for the investigation of the optimized conditions for double-layer capacitance, which can be coupled with pseudocapacitive materials to yield higher energy densities and capacities.
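
Gravimetric capacitances like those quoted are conventionally extracted from galvanostatic discharge via C = I·Δt / (m·ΔV). The numbers below are illustrative assumptions chosen to reproduce the reported 187 F g-1 figure, not the paper's measured discharge data:

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance from galvanostatic discharge: C = I*t/(m*dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Illustrative: 1 A/g corresponds to 1 mA on a 1 mg electrode; a 187 s
# discharge over a 1 V window then gives 187 F/g (values assumed, not measured).
c = specific_capacitance(0.001, 187.0, 0.001, 1.0)
assert abs(c - 187.0) < 1e-9
```

Note that two-electrode (coin cell) figures are computed differently from three-electrode ones, which is one reason the 187 F g-1 and 20 F g-1 values differ so widely.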

Keywords: carbon, electrochemical performance, electrodes, KOH/cellulose optimized ratio, morphology, supercapacitor

Procedia PDF Downloads 195
62 Effects of Applying Low-Dye Taping in Performing Double-Leg Squat on Electromyographic Activity of Lower Extremity Muscles for Collegiate Basketball Players with Excessive Foot Pronation

Authors: I. M. K. Ho, S. K. Y. Chan, K. H. P. Lam, G. M. W. Tong, N. C. Y. Yeung, J. T. C. Luk

Abstract:

Low-dye taping (LDT) is commonly used for treating foot problems, such as plantar fasciitis, and for supporting the foot arch in runners and non-athlete patients with pes planus. The potential negative impact of pronated feet, leading to tibial and femoral internal rotation via the entire kinetic chain, has been postulated and identified. The changed lower limb biomechanics, potentially leading to poor activation of hip and knee stabilizers such as gluteus maximus and medius, may be associated with a higher risk of knee injuries, including patellofemoral pain syndrome and ligamentous sprains, in many team sports players. It is therefore speculated that foot arch correction with LDT might enhance the use of the gluteal muscles. The purpose of this study was to investigate the effect of applying LDT on the surface electromyographic (sEMG) activity of superior gluteus maximus (SGMax), inferior gluteus maximus (IGMax), gluteus medius (GMed), and tibialis anterior (TA) during a double-leg squat. 12 male collegiate basketball players (age: 21.7 ± 2.5 years; body fat: 12.4 ± 3.6%; navicular drop: 13.7 ± 2.7 mm) with at least three years of regular basketball training experience participated in this study. Participants were excluded if they had a recent history of lower limb injuries, over 16.6% body fat, or less than a 10 mm drop in the navicular drop (ND) test. Recruited subjects visited the laboratory once for the within-subject crossover study. Maximum voluntary isometric contraction (MVIC) tests on all selected muscles were performed in randomized order, followed by sEMG tests on the double-leg squat under LDT and non-LDT conditions in counterbalanced order. SGMax, IGMax, GMed, and TA activities during the entire 2-second concentric and 2-second eccentric phases were normalized and interpreted as %MVIC. The magnitude of the difference between taped and non-taped conditions for each muscle was further assessed via standardized effect ± 90% confidence intervals (CI) with non-clinical magnitude-based inference. 
A paired-samples t-test showed a significant decrease (4.7 ± 1.4 mm) in ND (95% CI: 3.8, 5.6; p < 0.05), while no significant difference was observed between taped and non-taped conditions in the sEMG tests for all muscles and contractions (p > 0.05). Beyond traditional significance testing, magnitude-based inference showed a possible increase in IGMax activity (small standardized effect: 0.27 ± 0.44), a likely increase in GMed activity (small standardized effect: 0.34 ± 0.34), and a possible increase in TA activity (small standardized effect: 0.22 ± 0.29) during the eccentric phase. It is speculated that the decrease in navicular drop supported by LDT application could potentially enhance the use of inferior gluteus maximus and gluteus medius, especially during the eccentric phase. As the eccentric phase of the double-leg squat is an important component of landing activities in basketball, further studies on the onset and amount of gluteal activation during jumping and landing activities with LDT are recommended. Since neither hip nor knee kinematics were measured in this study, the underlying cause of the observed increase in gluteal activation during the squat after LDT is inconclusive. In this regard, the relationships between LDT application, ND, hip and knee kinematics, and gluteal muscle activity during sport-specific jumping and landing tasks should be the focus of future work.
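
The "small standardized effect" labels correspond to common magnitude bands used in magnitude-based inference. The sketch below is a minimal illustration with assumed thresholds and made-up paired differences; the paper's exact inference procedure and data are not given:

```python
import math

def standardized_effect(diffs):
    """Paired standardized effect: mean of the differences / SD of the differences."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / sd

def magnitude_label(effect):
    """Common magnitude bands (assumed thresholds, not the authors' exact scale)."""
    a = abs(effect)
    if a < 0.2:
        return "trivial"
    if a < 0.6:
        return "small"
    if a < 1.2:
        return "moderate"
    return "large"

# Hypothetical taped-minus-untaped %MVIC differences for 12 subjects
diffs = [3, 1, 4, 2, 5, 3, 2, 4, 1, 3, 2, 4]
e = standardized_effect(diffs)
assert e > 0                               # taping increased activity in the toy data
assert magnitude_label(0.34) == "small"    # matches the 'small' band in the text
```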

Keywords: flat foot, gluteus maximus, gluteus medius, injury prevention

Procedia PDF Downloads 135
61 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species living mainly in central China, where bamboos act as the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos therefore becomes important for identifying the habitat of giant pandas. There have been ongoing studies mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight Multi-Spectral (MS) bands. Recent studies have demonstrated that WV-2 imagery has high potential for the classification of tree species. Advanced classification techniques are important for utilising high spatial resolution imagery. It is generally agreed that object-based image analysis is a more desirable method than pixel-based analysis in processing high spatial resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers, and it is suggested that contextual classifiers can achieve greater accuracy than non-contextual ones. Thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located in the Wuyipeng area in Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brushes, and covered by other trees. Extensive fieldwork in Wuyipeng was carried out twice. The first campaign, on 11th June 2014, aimed at sampling feature locations for geometric correction and collecting training samples for classification. The second, on 11th September 2014, served to test the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification. The reflectance analysis also provided information for expanding sample points when only a few were known. 
Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM) method, and the Classification and Regression Tree (CART). Through field validation, it was shown that the classification result obtained using the spatially weighted k-NN method has the highest overall classification accuracy (77.61%) and Kappa coefficient (0.729); the producer’s accuracy and user’s accuracy reach 81.25% and 95.12% for the bamboo class, respectively, also higher than the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera, so canopy density could be estimated. It was found that it is difficult to identify bamboo in areas with a large canopy density (over 0.7); it is possible to extract bamboo in areas with a medium canopy density (0.2 to 0.7) and in sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region in Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing the habitats of giant pandas.
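
The paper derives its neighbour weights from a geostatistical model of spatial correlation; as a simpler stand-in, the sketch below illustrates the general idea of weighted k-NN voting on toy two-band "pixels". All data and class names here are hypothetical:

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=3):
    """k-NN vote with inverse-distance weights. The paper instead weights
    neighbours via a geostatistical (variogram-based) model; inverse
    distance is a simpler stand-in for illustration."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    votes = {}
    for i in idx:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + 1e-9)
    return max(votes, key=votes.get)

# Toy two-band "pixels": two spectral clusters standing in for bamboo vs. conifer
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
              [0.80, 0.90], [0.90, 0.80], [0.85, 0.85]])
y = np.array(["bamboo", "bamboo", "bamboo", "conifer", "conifer", "conifer"])
assert weighted_knn_predict(X, y, np.array([0.12, 0.18])) == "bamboo"
assert weighted_knn_predict(X, y, np.array([0.88, 0.82])) == "conifer"
```

In the object-based setting, each "sample" would be an image segment's band statistics rather than a single pixel.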

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 291
60 A Case Study on Utility of 18FDG-PET/CT Scan in Identifying Active Extra Lymph Nodes and Staging of Breast Cancer

Authors: Farid Risheq, M. Zaid Alrisheq, Shuaa Al-Sadoon, Karim Al-Faqih, Mays Abdulazeez

Abstract:

Breast cancer is the most frequently diagnosed cancer worldwide and a common cause of death among women. Various conventional anatomical imaging tools are utilized for diagnosis, histological assessment, and TNM (Tumor, Node, Metastases) staging of breast cancer. Sentinel lymph node biopsy is becoming an alternative to axillary lymph node dissection. Advances in 18-Fluoro-Deoxy-Glucose Positron Emission Tomography/Computed Tomography (18FDG-PET/CT) imaging have facilitated breast cancer diagnosis, utilizing the biological trapping of 18FDG inside lesion cells, expressed as the Standardized Uptake Value (SUVmax). Objective: To present the utility of 18FDG-PET/CT scans in detecting active extra lymph nodes and distant occult metastases for breast cancer staging. Subjects and Methods: Four female patients presented with initially classified TNM stages of breast cancer based on conventional anatomical diagnostic techniques. 18FDG-PET/CT scans were performed one hour after intravenous injection of 300-370 MBq of 18FDG, acquired over 7-8 bed positions at 130 s each. Transverse, sagittal, and coronal views, fused PET/CT, and MIP reconstructions were produced for each patient. Results: A total of twenty-four lesions in the breast, extended lesions in lung, liver, and bone, and active extra lymph nodes were detected among the patients. The initial TNM stage was significantly changed after the 18FDG-PET/CT scan for each patient, as follows: Patient 1: Initial TNM stage: T1N1M0 (stage I). Finding: Two lesions in the right breast (3.2 cm2, SUVmax = 10.2; 1.8 cm2, SUVmax = 6.7), associated with metastases to two right axillary lymph nodes. Final TNM stage: T1N2M0 (stage II). Patient 2: Initial TNM stage: T2N2M0 (stage III). Finding: Right breast lesion (6.1 cm2, SUVmax = 15.2), associated with metastases to a right internal mammary lymph node, two right axillary lymph nodes, and sclerotic lesions in the right scapula. Final TNM stage: T2N3M1 (stage IV). Patient 3: Initial TNM stage: T2N0M1 (stage III). 
Finding: Left breast lesion (11.1 cm2, SUVmax = 18.8), associated with metastases to two lymph nodes in the left hilum and three lesions across both lungs. Final TNM stage: T2N2M1 (stage IV). Patient 4: Initial TNM stage: T4N1M1 (stage III). Finding: Four lesions in the upper outer quadrant of the right breast (largest: 12.7 cm2, SUVmax = 18.6), in addition to one lesion in the left breast (4.8 cm2, SUVmax = 7.1), associated with metastases to multiple lesions in the liver (largest: 11.4 cm2, SUV = 8.0) and two bony lytic lesions in the left scapula and the first cervical vertebra. No evidence of regional or distant lymph node involvement. Final TNM stage: T4N0M2 (stage IV). Conclusions: Our results demonstrate that 18FDG-PET/CT scans significantly changed the TNM stages of breast cancer patients. While the T factor was unchanged, the N and M factors showed significant variations. A single PET/CT session was effective in detecting active extra lymph nodes and distant occult metastases that were not identified by conventional diagnostic techniques, and might advantageously replace bone scans and contrast-enhanced CT of the chest, abdomen, and pelvis. Applying 18FDG-PET/CT early in the investigation might shorten diagnosis time, help in deciding an adequate treatment protocol, and improve patients’ quality of life and survival. The trapping of 18FDG in malignant lesion cells after a PET/CT scan keeps the retention index (RI%) elevated for a considerable time, which might help localize the sentinel lymph node for biopsy using a handheld gamma probe detector. Future work is required to demonstrate this utility.
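
The SUVmax values quoted are maximum-voxel instances of the standard body-weight SUV normalization: tissue activity concentration divided by injected dose per unit body weight. The sketch below applies that textbook formula with illustrative numbers, not the patients' data:

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV = tissue activity concentration / (injected dose / weight),
    assuming tissue density of ~1 g/mL; SUVmax is this value at the hottest voxel."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Illustrative only (assumed values): 350 MBq injected into a 70 kg patient,
# with the hottest lesion voxel measuring 53 kBq/mL
s = suv(53_000.0, 350e6, 70_000.0)
assert abs(s - 10.6) < 0.01
```

A dimensionless SUV around 1 corresponds to uniform tracer distribution, which is why lesion values of 7-19 as reported here indicate strongly avid tissue.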

Keywords: axillary lymph nodes, breast cancer staging, fluorodeoxyglucose positron emission tomography/computed tomography, lymph nodes

Procedia PDF Downloads 284
59 Using the UK as a Case Study to Assess the Current State of Large Woody Debris Restoration as a Tool for Improving the Ecological Status of Natural Watercourses Globally

Authors: Isabelle Barrett

Abstract:

Natural watercourses provide a range of vital ecosystem services, notably freshwater provision. They also offer highly heterogeneous habitat which supports an extreme diversity of aquatic life. Exploitation of rivers, changing land use, and flood prevention measures have led to habitat degradation and subsequent biodiversity loss; indeed, freshwater species currently face a disproportionate rate of extinction compared to their terrestrial and marine counterparts. Large woody debris (LWD) encompasses the trees, large branches, and logs which fall into watercourses, and is responsible for important habitat characteristics. Historically, natural LWD has been removed from streams under the assumption that it is not aesthetically pleasing and is thus ecologically unfavourable, despite extensive evidence contradicting this. Restoration efforts aim to replace lost LWD in order to reinstate habitat heterogeneity. This paper aims to assess the current state of such restoration schemes for improving fluvial ecological health in the UK. A detailed review of the scientific literature was conducted alongside a meta-analysis of 25 UK-based projects involving LWD restoration. Projects were chosen for which sufficient information was attainable for analysis, covering a broad range of budgets and scales. The most effective strategies for river restoration encompass ecological success, stakeholder engagement, and scientific advancement; however, few of the projects surveyed showed sensitivity to all three: for example, only 32% of projects stated biological aims. Focus tended to be on stakeholder engagement and public approval, since this is often a key funding driver. Consequently, there is a tendency to focus on the aesthetic outcomes of a project; however, physical habitat restoration does not necessarily lead to direct biodiversity increases. 
This highlights the significance of rivers as highly heterogeneous environments with multiple interlinked processes, and emphasises the need for a stronger scientific presence in project planning. Poor scientific rigour means monitoring is often lacking, with varying definitions of success, if any, and these are rarely pre-determined. A tendency to overlook negative or neutral results was apparent, with unjustified weight often placed on qualitative results. The temporal scale of monitoring is typically inadequate to support scientific conclusions, with only 20% of projects surveyed reporting any pre-restoration monitoring. Furthermore, monitoring is often limited to a few variables, with biotic monitoring frequently fish-focussed. Owing to their longer life cycles and dispersal capability, fish are usually poor indicators of environmental change, making it difficult to attribute any changes in ecological health to restoration efforts. Although the potential impact of LWD restoration may be positive, this method of restoration could simply be making short-term, small-scale improvements; without addressing the underlying causes of degradation, for example poor water quality, the issue cannot be fully resolved. Promotion of standardised monitoring for LWD projects could help establish a deeper understanding of the ecology surrounding the practice, supporting movement towards adaptive management in which scientific evidence feeds back to practitioners, enabling the design of more efficient projects with greater ecological success. By highlighting LWD, this study hopes to address the difficulties faced within river management, and emphasise the need for a more holistic international and inter-institutional approach to tackling problems associated with degradation.

Keywords: biological monitoring, ecological health, large woody debris, river management, river restoration

Procedia PDF Downloads 178
58 Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel

Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon

Abstract:

The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. 
Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges.

Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program

Procedia PDF Downloads 54
57 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil

Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes

Abstract:

Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters because of the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze soil properties with minimal impact on the soil's initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS is written in the Fortran programming language and couples a hydrological model based on the Richards equation with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordão with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State, in the Mantiqueira Mountains, and has a historical record of landslides. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model. 
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify slope declivity. The analyzed period runs from March 6th to March 8th, 2017. As a result, the TRIGRS model calculated the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall event, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model as having low Factor of Safety values, demonstrating its effectiveness in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which accumulated rainfall higher than 80mm/m² in 72 hours might trigger landslides in urban and natural slopes. Geotechnical and geophysical methods proved very useful for identifying soil properties and characterizing the geology of the area. Therefore, combining geotechnical and geophysical soil characterization with TRIGRS modeling of landslide-susceptible areas is useful for urban planning. Furthermore, early warning systems can be developed by coupling the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
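The stability side of the coupled model described above is the infinite-slope limit-equilibrium equation, in which the Factor of Safety falls as rainfall infiltration raises the pressure head at depth. A minimal sketch of that relationship (the parameter values below are illustrative assumptions, not the study's calibrated inputs):

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, gamma_s, depth, psi, gamma_w=9.81):
    """Infinite-slope Factor of Safety of the kind TRIGRS-style models evaluate.

    c         -- effective soil cohesion (kPa)
    phi_deg   -- internal friction angle (degrees)
    slope_deg -- slope angle (degrees)
    gamma_s   -- soil unit weight (kN/m^3)
    depth     -- depth below ground surface (m)
    psi       -- pressure head at that depth (m), supplied by the hydrological model
    gamma_w   -- unit weight of water (kN/m^3)
    """
    phi = math.radians(phi_deg)
    d = math.radians(slope_deg)
    frictional = math.tan(phi) / math.tan(d)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * depth * math.sin(d) * math.cos(d))
    return frictional + cohesive

# As rainfall infiltrates, the pressure head psi rises and the Factor of Safety drops:
fs_dry = factor_of_safety(c=5.0, phi_deg=30, slope_deg=35, gamma_s=18.0, depth=2.0, psi=-0.5)
fs_wet = factor_of_safety(c=5.0, phi_deg=30, slope_deg=35, gamma_s=18.0, depth=2.0, psi=1.0)
```

With these hypothetical values, the positive post-infiltration pressure head pulls the Factor of Safety below 1, the conventional failure threshold, mirroring the decline the study observed after each rainfall event.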

Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey

Procedia PDF Downloads 148
56 Correlation of Unsuited and Suited 5ᵗʰ Female Hybrid III Anthropometric Test Device Model under Multi-Axial Simulated Orion Abort and Landing Conditions

Authors: Christian J. Kennett, Mark A. Baldwin

Abstract:

As several companies are working towards returning American astronauts back to space on US-made spacecraft, NASA developed a human flight certification-by-test and analysis approach due to the cost-prohibitive nature of extensive testing. This process relies heavily on the quality of analytical models to accurately predict crew injury potential specific to each spacecraft and under dynamic environments not tested. As the prime contractor on the Orion spacecraft, Lockheed Martin was tasked with quantifying the correlation of analytical anthropometric test devices (ATDs), also known as crash test dummies, against test measurements under representative impact conditions. Multiple dynamic impact sled tests were conducted to characterize Hybrid III 5th ATD lumbar, head, and neck responses with and without a modified shuttle-era advanced crew escape suit (ACES) under simulated Orion landing and abort conditions. Each ATD was restrained via a 5-point harness in a mockup Orion seat fixed to a dynamic impact sled at the Wright Patterson Air Force Base (WPAFB) Biodynamics Laboratory in the horizontal impact accelerator (HIA). ATDs were subject to multiple impact magnitudes, half-sine pulse rise times, and XZ - ‘eyeballs out/down’ or Z-axis ‘eyeballs down’ orientations for landing or an X-axis ‘eyeballs in’ orientation for abort. Several helmet constraint devices were evaluated during suited testing. Unique finite element models (FEMs) were developed of the unsuited and suited sled test configurations using an analytical 5th ATD model developed by LSTC (Livermore, CA) and deformable representations of the seat, suit, helmet constraint countermeasures, and body restraints. Explicit FE analyses were conducted using the non-linear solver LS-DYNA. 
Head linear and rotational acceleration, head rotational velocity, upper neck force and moment, and lumbar force time histories were compared between test and analysis using the enhanced error assessment of response time histories (EEARTH) composite score index. The EEARTH rating, paired with the correlation and analysis (CORA) corridor rating, provided a composite ISO score that was used to assess model correlation accuracy. NASA occupant protection subject matter experts established an ISO score of 0.5 or greater as the minimum expectation for correlating analytical and experimental ATD responses. Unsuited 5th ATD head X, Z, and resultant linear accelerations, head Y rotational accelerations and velocities, neck X and Z forces, and lumbar Z forces all showed consistent ISO scores above 0.5 in the XZ impact orientation, regardless of peak g-level or rise time. Upper neck Y moments were near or above the 0.5 score for most of the XZ cases. Similar trends were found in the XZ and Z-axis suited tests despite the addition of several different countermeasures for restraining the helmet. For the X-axis ‘eyeballs in’ loading direction, only resultant head linear acceleration and lumbar Z-axis force produced ISO scores above 0.5, whether unsuited or suited. The analytical LSTC 5th ATD model showed good correlation across multiple head, neck, and lumbar responses in both the unsuited and suited configurations when loaded in the XZ ‘eyeballs out/down’ direction. Upper neck moments were consistently the most difficult to predict, regardless of impact direction or test configuration.
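To illustrate the general idea behind such corridor-based time-history ratings, the sketch below implements a heavily simplified corridor check: simulation samples inside a tight corridor around the test trace score fully, and the per-sample score decays to zero at an outer corridor. This is only an illustration of the concept, not the actual EEARTH, CORA, or ISO implementation, and the corridor widths are arbitrary assumptions:

```python
import math

def corridor_score(test, sim, inner=0.05, outer=0.5):
    """Simplified corridor rating (illustrative only, not CORA itself).

    Each simulation sample scores 1.0 inside an inner corridor around the
    test signal, 0.0 outside the outer corridor, and degrades linearly in
    between. Corridor half-widths scale with the test signal's peak value.
    """
    peak = max(abs(x) for x in test)
    d_in, d_out = inner * peak, outer * peak
    total = 0.0
    for t_val, s_val in zip(test, sim):
        dev = abs(s_val - t_val)
        if dev <= d_in:
            total += 1.0
        elif dev < d_out:
            total += (d_out - dev) / (d_out - d_in)
    return total / len(test)

# Idealized half-sine pulse as the "test" response, plus two candidate simulations:
test_pulse = [math.sin(math.pi * i / 199) for i in range(200)]
good_sim = [0.98 * x for x in test_pulse]   # close match -> score near 1
poor_sim = [0.5 * x for x in test_pulse]    # large deviation -> lower score
```

A production rating such as CORA additionally scores phase shift, magnitude, and slope and combines them into a weighted composite, which is what the 0.5 ISO threshold above is applied to.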

Keywords: impact biomechanics, manned spaceflight, model correlation, multi-axial loading

Procedia PDF Downloads 91
55 Immunostimulatory Response of Supplement Feed in Fish against Aeromonas hydrophila

Authors: Shikha Rani, Neeta Sehgal, Vipin Kumar Verma, Om Prakash

Abstract:

Introduction: Fish is an important protein source for humans and has great economic value. Fish cultures are affected by various anthropogenic activities that lead to bacterial and viral infections. Aeromonas hydrophila is a fish-pathogenic bacterium that causes several aquaculture outbreaks throughout the world and leads to huge mortalities. In this study, plants of no commercial value were used to investigate their immunostimulatory, antioxidant, anti-inflammatory, anti-bacterial, and disease resistance potential in fish against Aeromonas hydrophila, through fish feed fortification. Methods: The plant material was dried at room temperature in the shade, extracted in methanol, and analysed for bioactive compounds through GC-MS/MS. DPPH, FRAP, phenolic, and flavonoid contents were estimated following standardized protocols. In silico molecular docking was also performed to validate its broad-spectrum activities based on binding affinity with specific proteins. Fish were divided into four groups (n=6; total 30 in a group): Group 1, non-challenged fish (fed on a non-supplemented diet); Group 2, fish challenged with bacteria (fed on a non-supplemented diet); Groups 3 and 4, fish challenged with bacteria (A. hydrophila) and fed on plant-supplemented feed at 2.5% and 5%. Blood was collected from the fish on days 0, 7, 14, 21, and 28. Serum was separated for serum glutamic-oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), alkaline phosphatase (ALP), lysozyme activity, superoxide dismutase (SOD), and lipid peroxidation (LPO) assays, and molecular parameters (including cytokine levels) were estimated through ELISA. The phagocytic activity of macrophages from the spleen and head kidney, along with quantitative analysis of immune-related genes, was analysed in different tissue samples. The digestive enzymes (pepsin, trypsin, and chymotrypsin) were also measured to evaluate the effect of plant-supplemented feed on the freshwater fish. 
Results and Discussion: GC-MS/MS analysis of the methanolic plant extract validated the presence of key compounds having antioxidant, anti-inflammatory, anti-bacterial, and immunomodulatory activities along with disease resistance properties. Biochemical investigations (ABTS, DPPH, and FRAP assays, and total flavonoid and phenol contents), together with the promising binding affinities towards different proteins in the molecular docking analysis, confirmed the potential of this plant for investigation as a supplement in fish feed. Measurements of liver function tests, ALP, oxidant-antioxidant enzyme concentrations, and immunoglobulin concentrations in the experimental groups (3 and 4) showed significant improvement compared to the positive control group. The histopathological evaluation of the liver, spleen, and head kidney supports the biochemical findings. The macrophages isolated from the groups fed on supplemented feed showed a higher percentage of phagocytosis and a higher phagocytic index, indicating an enhanced cell-mediated immune response. Significant improvements in digestive enzymes were also observed in fish fed on supplemented feed, even after weekly challenges with bacteria. Hence, the plant-fortified feed can be recommended as a regular feed to enhance fish immunity and disease resistance against Aeromonas hydrophila infection, after confirmation from field trials.

Keywords: immunostimulation, antipathogen, plant fortified feed, macrophages, GC-MS/MS, in silico molecular docking

Procedia PDF Downloads 55
54 Stakeholder Engagement to Address Urban Health Systems Gaps for Migrants

Authors: A. Chandra, M. Arthur, L. Mize, A. Pomeroy-Stevens

Abstract:

Background: Lower and middle-income countries (LMICs) in Asia face rapid urbanization resulting in both economic opportunities (the urban advantage) and emerging health challenges. Urban health risks are magnified in informal settlements and include infectious disease outbreaks, inadequate access to health services, and poor air quality. Over the coming years, urban spaces in Asia will face accelerating public health risks related to migration, climate change, and environmental health. These challenges are complex and require multi-sectoral and multi-stakeholder solutions. The Building Health Cities (BHC) program is funded by the United States Agency for International Development (USAID) to work with smart city initiatives in the Asia region. BHC approaches urban health challenges by addressing policies, planning, and services through a health equity lens, with a particular focus on informal settlements and migrant communities. The program works to develop data-driven decision-making, build inclusivity through stakeholder engagement, and facilitate the uptake of appropriate technology. Methodology: The BHC program has partnered with the smart city initiatives of Indore in India, Makassar in Indonesia, and Da Nang in Vietnam. Implementing partners support municipalities to improve health delivery and equity using two key approaches: political economy analysis and participatory systems mapping. Political economy analyses evaluate barriers to collective action, including corruption, security, accountability, and incentives. Systems mapping evaluates community health challenges using a cross-sectoral approach, analyzing the impact of economic, environmental, transport, security, health system, and built environment factors. The mapping exercise draws on the experience and expertise of a diverse cohort of stakeholders, including government officials, municipal service providers, and civil society organizations. 
Results: Systems mapping and political economy analyses identified significant barriers for health care in migrant populations. In Makassar, migrants are unable to obtain the necessary card that entitles them to subsidized health services. This finding is being used to engage with municipal governments to mitigate the barriers that limit migrant enrollment in the public social health insurance scheme. In Indore, the project identified poor drainage of storm and wastewater in migrant settlements as a cause of poor health. Unsafe and inadequate infrastructure placed residents of these settlements at risk for both waterborne diseases and injuries. The program also evaluated the capacity of urban primary health centers serving migrant communities, identifying challenges related to their hours of service and shortages of health workers. In Da Nang, the systems mapping process has only recently begun, with the formal partnership launched in December 2019. Conclusion: This paper explores lessons learned from BHC’s systems mapping, political economy analyses, and stakeholder engagement approaches. The paper shares progress related to the health of migrants in informal settlements. Case studies feature barriers identified and mitigating steps, including governance actions, taken by local stakeholders in partner cities. The paper includes an update on ongoing progress from Indore and Makassar and experience from the first six months of program implementation from Da Nang.

Keywords: informal settlements, migration, stakeholder engagement mapping, urban health

Procedia PDF Downloads 89
53 Analyzing Spatio-Structural Impediments in the Urban Trafficscape of Kolkata, India

Authors: Teesta Dey

Abstract:

Integrated transport development with proper traffic management leads to sustainable growth of any urban sphere. Appropriate mass transport planning is essential for populous cities in third world countries like India. The exponential growth of motor vehicles on an unplanned road network is now a common feature of major urban centres in India. Kolkata, the third largest mega city in India, is no exception. The imbalance between demand and supply of unplanned transport services in this city is manifested in the high economic and environmental costs borne by the associated society. With the passage of time, the growth and extent of passenger demand for rapid urban transport have outstripped proper infrastructural planning, causing severe transport problems across the urban realm. Hence Kolkata stands out as one of the world's most crisis-ridden metropolises. The urban transport crisis of this city involves severe traffic congestion, disparity in mass transport services across changing peripheral land uses, route overlapping, falling travel speeds and faulty implementation of governmental plans, mostly induced by the rapid growth of private vehicles on limited road space with a huge carbon footprint. Therefore the paper will critically analyze the extant road network pattern for improving regional connectivity and accessibility, assess the degree of congestion, identify deviations from the demand-supply balance and finally evaluate the emerging alternative transport options promoted by the government. For this purpose, linear, nodal and spatial transport networks have been assessed based on selected indices, viz. Road Degree, Traffic Volume, Shimbel Index, Direct Bus Connectivity, Average Travel and Waiting Time Indices, Route Variety, Service Frequency, Bus Intensity, Concentration Analysis, Delay Rate, Quality of Traffic Transmission, Lane Length Duration Index and Modal Mix. 
A total of 20 Traffic Intersection Points (TIPs) have been selected for the measurement of nodal accessibility. Critical Congestion Zones (CCZs) are delineated based on one-km buffer zones around each TIP for congestion pattern analysis. A total of 480 bus routes were assessed to identify deficiencies in network planning. Apart from bus services, the combined effects of other mass and para-transit modes, comprising metro rail, auto, cab and ferry services, are also analyzed. Based on a systematic random sampling method, the perceptions of 1,500 daily urban passengers were studied to check the ground realities. The outcome of this research identifies the spatial disparity among the 15 boroughs of the city, with severe route overlapping and congestion problems. Mass transport services based in North and Central Kolkata exceed the transport strength of south and peripheral Kolkata. Faulty infrastructural conditions, service inadequacy, economic loss and workers’ inefficiency are the most dominant reasons behind the defective mass transport network plan. Hence there is an urgent need to revive the existing road-based mass transport system of this city through a holistic management approach: upgrading traffic infrastructure, designing new roads, fostering better cooperation among different mass transport agencies, coordinating transport and changing land use policies, substantially increasing funding and, finally, raising general passengers’ awareness.
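Of the indices listed above, the Shimbel index is perhaps the simplest to state precisely: for an unweighted road network it is the sum of shortest-path distances from one node to all others, so lower values indicate better nodal accessibility. A minimal sketch on a hypothetical five-node network (illustrative only, not Kolkata data):

```python
from collections import deque

def shimbel_index(graph, node):
    """Shimbel (accessibility) index: sum of shortest-path distances from
    `node` to every other reachable node in an unweighted network.
    `graph` is an adjacency dict; lower index = more accessible node."""
    dist = {node: 0}
    queue = deque([node])
    while queue:  # breadth-first search gives shortest paths in hop counts
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values())

# Toy network: a central hub (A) with a short chain out to E.
network = {
    "A": ["B", "C", "D"],
    "B": ["A"],
    "C": ["A"],
    "D": ["A", "E"],
    "E": ["D"],
}
```

Here the hub `A` scores 1+1+1+2 = 5 while the peripheral `E` scores 9, mirroring how central Kolkata intersections outrank peripheral ones on accessibility.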

Keywords: carbon footprint, critical congestion zones, direct bus connectivity, integrated transport development

Procedia PDF Downloads 253
52 Circular Nitrogen Removal, Recovery and Reuse Technologies

Authors: Lina Wu

Abstract:

The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and threatens water quality. Nitrogen pollution control has become a global concern. The concentration of nitrogen in water is reduced by converting ammonia, nitrate and nitrite nitrogen into nitrogen-containing gas through biological treatment, physicochemical treatment and oxidation technology. However, some wastewater with high ammonia nitrogen, including landfill leachate, is difficult to treat by traditional nitrification and denitrification because of its high COD content. The core process of denitrification is that denitrifying bacteria convert the nitrite produced by nitrification into nitrogen gas under anaerobic conditions; however, the leachate's low carbon-to-nitrogen ratio does not meet the conditions for denitrification. Many studies have shown that naturally occurring autotrophic anammox bacteria can combine nitrite and ammonia nitrogen without a carbon source, through their functional genes, to achieve total nitrogen removal, which makes them very suitable for removing nitrogen from leachate. In addition, the process saves a great deal of aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. Short-cut (partial) nitrification and denitrification coupled with anammox ensures total nitrogen removal and improves removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal technology. In recent years, research has found that the algal-bacterial symbiotic system offers further advantages for water treatment, because this process not only helps to improve the efficiency of wastewater treatment but also allows carbon dioxide reduction and resource recovery. 
Microalgae use carbon dioxide dissolved in the water or released through bacterial respiration to produce oxygen for bacteria through photosynthesis under light, and bacteria, in turn, provide metabolites and inorganic carbon sources for the growth of microalgae, which may allow the algal-bacterial symbiotic system to save most or all of the aeration energy consumption. It has become a trend to make microalgae and light-averse anammox bacteria play synergistic roles by adjusting the light-to-dark ratio. Microalgae in the outer layer of light-exposed granules block most of the light and provide cofactors and amino acids to promote nitrogen removal. In particular, Myxococcota MYX1 can degrade the extracellular proteins produced by microalgae, providing amino acids for the entire bacterial community, which helps anammox bacteria save metabolic energy and adapt to light. As a result, initiating and maintaining a process that combines dominant algae with anaerobic denitrifying bacterial communities has great potential for treating landfill leachate. Chlorella shows excellent removal performance and can withstand extreme environments of high ammonia nitrogen, high salinity and low temperature. It is urgent to study whether an algal-sludge mixture rich in denitrifying bacteria and Chlorella can greatly improve the efficiency of landfill leachate treatment in an anaerobic environment where photosynthesis has stopped. The optimal dilution of simulated landfill leachate can be found by determining the treatment performance of the same batch of bacteria-algae mixtures under different initial ammonia nitrogen concentrations and comparing the results. High-throughput sequencing technology was used to analyze the changes in microbial diversity and in the related functional genera and genes under the optimal conditions, providing a theoretical and practical basis for the engineering application of this novel bacteria-algae symbiosis system in biogas slurry treatment and resource utilization.
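The energy-saving argument above follows from the anammox catabolism itself, which pairs ammonium directly with nitrite and therefore needs neither an organic carbon source nor the oxygen required to oxidize ammonia all the way to nitrate. In simplified form:

```latex
\mathrm{NH_4^+ \; + \; NO_2^- \;\longrightarrow\; N_2 \; + \; 2\,H_2O}
```

The full experimentally derived stoichiometry also yields small amounts of nitrate and biomass, but this simplified reaction captures why only partial nitritation (ammonia to nitrite, not nitrate) is needed upstream, eliminating a large fraction of the aeration demand of conventional nitrification-denitrification.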

Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification, algae-bacteria interaction

Procedia PDF Downloads 19
51 Impact of Simulated Brain Interstitial Fluid Flow on the Chemokine CXC-Chemokine-Ligand-12 Release From an Alginate-Based Hydrogel

Authors: Wiam El Kheir, Anais Dumais, Maude Beaudoin, Bernard Marcos, Nick Virgilio, Benoit Paquette, Nathalie Faucheux, Marc-Antoine Lauzon

Abstract:

The highly infiltrative pattern of glioblastoma multiforme (GBM) cells is the main cause of the failure of current standard treatments. The tumor's high heterogeneity, the interstitial fluid flow (IFF) and chemokines guide GBM cell migration in the brain parenchyma, resulting in tumor recurrence. Drug delivery systems have emerged as an alternative approach to developing effective treatments for the disease. Some recent studies have proposed harnessing the effect of CXC-chemokine-ligand-12 (CXCL12) to direct and control cancer cell migration through a delivery system. However, the effect of the brain environment's dynamics on such delivery systems remains poorly understood. Nanoparticles (NPs) and hydrogels are known to be good carriers for encapsulating different agents and controlling their release. We studied the release of CXCL12 (free or loaded into NPs) from an alginate-based hydrogel under static and indirect perfusion (IP) conditions. Under static conditions, the main phenomenon driving CXCL12 release from the hydrogel was diffusion, with strong interactions between the positively charged CXCL12 and the negatively charged alginate. CXCL12 release profiles were independent of the initial mass loadings. We then demonstrated that the release could be tuned by loading CXCL12 into alginate/chitosan nanoparticles (Alg/Chit-NPs) and embedding these in the alginate hydrogel. The initial burst release was substantially attenuated, and overall cumulative release percentages of 21%, 16% and 7% were observed for initial mass loadings of 0.07, 0.13 and 0.26 µg, respectively, suggesting stronger electrostatic interactions. Results were mathematically modeled within a previously developed framework based on Fick's second law of diffusion to estimate the effective diffusion coefficient (Deff) and the mass transfer coefficient. Embedding the CXCL12 into NPs decreased the Deff by an order of magnitude, which was coherent with the experimental data. 
Thereafter, we developed an in-vitro 3D model that takes into consideration the convective contribution of the brain IFF, to study CXCL12 release in an in-vitro microenvironment that mimics the human brain as faithfully as possible. Owing to its unique design, the model also allowed us to understand the effect of IP on CXCL12 release in both time and space. Four flow rates (0.5, 3, 6.5 and 10 µL/min), spanning values that may govern CXCL12 release in-vivo depending on the tumor location, were assessed. Under IP, cumulative percentages varying between 4.5-7.3%, 23-58.5%, 77.8-92.5% and 89.2-95.9% were released at the four flow rates for initial mass loadings of 0.08, 0.16 and 0.33 µg. As the flow rate increased, IP culture conditions resulted in a higher release of CXCL12 than static conditions, as convection became the main driving mass transport phenomenon. Further, depending on the flow rate, IP had a direct impact on CXCL12 distribution within the simulated brain tissue, which illustrates the importance of developing such 3D in-vitro models to assess the efficiency of a delivery system targeting the brain. In future work, using this very model, we aim to understand the impact of the different phenomena at play on GBM cell behavior in response to the resulting chemokine gradient under various flows, while allowing the cells to express their invasive characteristics in an in-vitro microenvironment that mimics the in-vivo brain parenchyma.
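The diffusion-dominated (static) limit discussed above has a standard closed form: for a plane hydrogel sheet with perfect-sink boundaries, Crank's series solution of Fick's second law gives the fractional release over time as a function of the effective diffusivity. A sketch with hypothetical diffusivities and slab thickness (not the study's fitted values):

```python
import math

def fractional_release(D, L, t, n_terms=100):
    """Fraction of solute released from a plane sheet of thickness L (m) at
    time t (s): Crank's series solution of Fick's second law with constant
    effective diffusivity D (m^2/s) and perfect-sink boundary conditions."""
    s = 0.0
    for n in range(n_terms):
        k = 2 * n + 1  # only odd modes contribute
        s += (8.0 / (k * k * math.pi ** 2)) * math.exp(-D * (k * math.pi / L) ** 2 * t)
    return 1.0 - s

# Dropping the effective diffusivity by an order of magnitude (e.g. chemokine
# loaded into NPs, per the abstract's Deff finding) slows the release:
D_free, D_np = 1e-10, 1e-11   # hypothetical effective diffusivities (m^2/s)
L = 2e-3                      # hypothetical 2 mm hydrogel slab
frac_free = fractional_release(D_free, L, t=3600)
frac_np = fractional_release(D_np, L, t=3600)
```

With these assumed values, the NP-loaded case releases a markedly smaller fraction at any given time, which is the mechanism behind the attenuated burst release reported above.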

Keywords: 3D culture system, chemokine gradient, glioblastoma multiforme, kinetic release, mathematical modeling

Procedia PDF Downloads 57
50 Translation, Cross-Cultural Adaptation, and Validation of the Vividness of Movement Imagery Questionnaire 2 (VMIQ-2) to Classical Arabic Language

Authors: Majid Alenezi, Abdelbare Algamode, Amy Hayes, Gavin Lawrence, Nichola Callow

Abstract:

The purpose of this study was to translate and culturally adapt the Vividness of Movement Imagery Questionnaire-2 (VMIQ-2) from English to produce a new Arabic version (VMIQ-2A), and to evaluate the reliability and validity of the translated questionnaire. The questionnaire assesses how vividly and clearly individuals are able to imagine themselves performing everyday actions. Its purpose is to measure individuals’ ability to conduct movement imagery, which can be defined as “the cognitive rehearsal of a task in the absence of overt physical movement.” Movement imagery has been introduced in physiotherapy as a promising intervention technique, especially when physical exercise is not possible (e.g., pain, immobilisation). Considerable evidence indicates that movement imagery interventions improve physical function, but to maximize efficacy it is important to know the imagery abilities of the individuals being treated. Given the increase in the global sharing of knowledge, it is desirable to use standard measures of imagery ability across languages and cultures, which motivated this project. The translation procedure followed guidelines from the Translation and Cultural Adaptation group of the International Society for Pharmacoeconomics and Outcomes Research and involved the following phases. Preparation: the original VMIQ-2 was adapted slightly to provide additional information and simplify grammar. Forward translation: three native speakers resident in Saudi Arabia translated the original VMIQ-2 from English to Arabic, with instructions to preserve meaning (rather than translate literally) and cultural relevance. Reconciliation: the project manager (first author), the primary translator, and a physiotherapist reviewed the three independent translations to produce a reconciled first Arabic draft of the VMIQ-2A. Backward translation: a fourth translator (a native Arabic speaker fluent in English) translated the reconciled first Arabic draft literally back into English.
The project manager and two study authors compared the English back-translation to the original VMIQ-2 and produced the second Arabic draft. Cognitive debriefing: to assess participants’ understanding of the second Arabic draft, 7 native Arabic speakers resident in the UK completed the questionnaire, rated the clarity of the questions, identified difficult words or passages, and wrote in their own words their understanding of key terms. Following review of this feedback, a final Arabic version was created. 142 native Arabic speakers completed the questionnaire in community meeting places or at home; a subset of 44 participants completed the questionnaire a second time 1 week later. Results showed the translated questionnaire to be valid and reliable. Correlation coefficients indicated good test-retest reliability, and Cronbach’s α indicated high internal consistency. Construct validity was tested in two ways. Imagery ability scores have been found to be invariant across gender; this result was replicated within the current study, assessed by independent-samples t-test. Additionally, experienced sports participants have higher imagery ability than those less experienced; this result was also replicated within the current study, assessed by analysis of variance, supporting construct validity. These results provide preliminary evidence that the VMIQ-2A is reliable and valid for use with a general population of native Arabic speakers. Future research will include validation of the VMIQ-2A in a larger sample and testing of validity in specific patient populations.
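The internal-consistency statistic reported here, Cronbach's α, has a simple closed form: α = k/(k−1) · (1 − Σ itemvariances / totalvariance). A minimal sketch of that computation (the data in the test are invented, not the study's):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. `items` holds one
    list of scores per questionnaire item; all lists cover the same
    respondents in the same order (population variances throughout)."""
    k = len(items)
    sum_item_var = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1.0 - sum_item_var / pvariance(totals))
```

When every item carries the same signal, α approaches 1; uncorrelated items push it toward 0.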

Keywords: motor imagery, physiotherapy, translation and validation, imagery ability

Procedia PDF Downloads 304
49 Magnetic Single-Walled Carbon Nanotubes (SWCNTs) as Novel Theranostic Nanocarriers: Enhanced Targeting and Noninvasive MRI Tracking

Authors: Achraf Al Faraj, Asma Sultana Shaik, Baraa Al Sayed

Abstract:

Specific and effective targeting of drug delivery systems (DDS) to cancerous sites remains a major challenge for better diagnosis and therapy. Recently, SWCNTs, with their unique physicochemical properties and ability to cross the cell membrane, have shown promise in the biomedical field. The purpose of this study was first to develop biocompatible iron oxide-tagged SWCNTs as diagnostic nanoprobes to allow their noninvasive detection using MRI and their preferential targeting in a breast cancer murine model by placing an optimized flexible magnet over the tumor site. Magnetic targeting was combined with active targeting using antibody-conjugated SWCNTs. The therapeutic efficacy of doxorubicin-conjugated SWCNTs was assessed, and the suitability of diffusion-weighted (DW-) MRI as a sensitive imaging biomarker was investigated. Short polyvinylpyrrolidone (PVP)-stabilized water-soluble SWCNTs were first developed, tagged with iron oxide nanoparticles, and conjugated with Endoglin/CD105 monoclonal antibodies. They were then conjugated with doxorubicin. The SWCNT conjugates were extensively characterized using TEM, UV-Vis spectrophotometry, dynamic light scattering (DLS), zeta potential analysis, and electron spin resonance (ESR) spectroscopy. Their MR relaxivities (i.e., r1 and r2*) were measured at 4.7 T, and their iron content and metal impurities were quantified using ICP-MS. SWCNT biocompatibility and drug efficacy were then evaluated both in vitro and in vivo using a set of immunological assays. Luciferase-enhanced bioluminescence 4T1 mouse mammary tumor cells (4T1-Luc2) were injected into the right inguinal mammary fat pad of Balb/c mice. Tumor-bearing mice received either free doxorubicin (DOX) or SWCNTs with or without DOX or iron oxide nanoparticles.
A multi-pole 10x10 mm high-energy flexible magnet was maintained over the tumor site for 2 hours post-injection, and its properties and polarity were optimized to enhance magnetic targeting of SWCNTs toward the primary tumor site. Tumor volume was quantified during the follow-up study using a fast spin echo MRI sequence. To detect the homing of SWCNTs to the main tumor site, a susceptibility-weighted multi-gradient echo (MGE) sequence was used to generate T2* maps. Apparent diffusion coefficient (ADC) measurements were also performed as a sensitive imaging biomarker providing earlier and better assessment of disease treatment. At several time points post-SWCNT injection, histological analyses were performed on tumor extracts, and iron-loaded SWCNTs were quantified using ICP-MS in tumor sites, liver, spleen, kidneys, and lung. The optimized multi-pole magnet produced enhanced targeting of magnetic SWCNTs to the primary tumor site, much higher than the active targeting achieved using antibody-conjugated SWCNTs alone. Iron loading allowed sensitive noninvasive tracking after intravenous administration using MRI. The active targeting of doxorubicin through magnetic antibody-conjugated SWCNT nanoprobes was found to considerably reduce primary tumor growth and may have inhibited the development of lung metastasis in the tumor-bearing mice. ADC measurements in DW-MRI were found to increase significantly in a time-dependent manner after the injection of DOX-conjugated SWCNT complexes.
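The relaxivities (r1, r2*) measured above are conventionally obtained as the slope of the relaxation rate 1/T plotted against contrast-agent (iron) concentration. A minimal least-squares sketch of that fit, with synthetic values rather than the study's data:

```python
def relaxivity(concentrations_mM, relaxation_times_s):
    """Least-squares slope of the relaxation rate R = 1/T versus
    agent concentration. The slope is the relaxivity (s^-1 mM^-1);
    the intercept is the native relaxation rate of the medium."""
    rates = [1.0 / t for t in relaxation_times_s]
    n = len(concentrations_mM)
    mx = sum(concentrations_mM) / n
    my = sum(rates) / n
    sxx = sum((x - mx) ** 2 for x in concentrations_mM)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(concentrations_mM, rates))
    slope = sxy / sxx
    return slope, my - slope * mx  # (relaxivity, baseline rate)
```

The same fit applies to r1 (from T1 maps) and r2* (from T2* maps); only the input relaxation times change.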

Keywords: single-walled carbon nanotubes, nanomedicine, magnetic resonance imaging, cancer diagnosis and therapy

Procedia PDF Downloads 302
48 Identification of a Panel of Epigenetic Biomarkers for Early Detection of Hepatocellular Carcinoma in Blood of Individuals with Liver Cirrhosis

Authors: Katarzyna Lubecka, Kirsty Flower, Megan Beetch, Lucinda Kurzava, Hannah Buvala, Samer Gawrieh, Suthat Liangpunsakul, Tracy Gonzalez, George McCabe, Naga Chalasani, James M. Flanagan, Barbara Stefanska

Abstract:

Hepatocellular carcinoma (HCC), the most prevalent type of primary liver cancer, is the second leading cause of cancer death worldwide. Late onset of clinical symptoms in HCC results in late diagnosis and poor disease outcome. Approximately 85% of individuals with HCC have underlying liver cirrhosis. However, not all cirrhotic patients develop cancer. Reliable early detection biomarkers that can distinguish cirrhotic patients who will develop cancer from those who will not are urgently needed and could increase the cure rate from 5% to 80%. We used the Illumina 450K microarray to test whether blood DNA, an easily accessible source of DNA, bears site-specific changes in DNA methylation in response to HCC before diagnosis with conventional tools (pre-diagnostic). The top 11 differentially methylated sites were selected for validation by pyrosequencing. The diagnostic potential of the 11 pyrosequenced probes was tested in blood samples from a prospective cohort of cirrhotic patients. We identified 971 differentially methylated CpG sites in pre-diagnostic HCC cases as compared with healthy controls (P < 0.05, paired Wilcoxon test, ICC ≥ 0.5). Nearly 76% of differentially methylated CpG sites showed lower levels of methylation in cases vs. controls (P = 2.973E-11, Wilcoxon test). Classification of the CpG sites according to their location relative to CpG islands and transcription start sites revealed that the hypomethylated loci are located in regulatory regions important for gene transcription, such as CpG island shores, promoters, and 5’UTRs, at higher frequency than the hypermethylated sites. Among the 735 CpG sites hypomethylated in cases vs. controls, 482 were assigned to gene coding regions, whereas the 236 hypermethylated sites corresponded to 160 genes.
Bioinformatics analysis using GO, KEGG, and the DAVID knowledgebase indicated that the differentially methylated CpG sites are located in genes associated with functions essential for gene transcription, cell adhesion, cell migration, and regulation of signal transduction pathways. Taking into account the magnitude of the difference, statistical significance, location, and consistency across the majority of matched case-control pairs, we selected 11 CpG loci corresponding to 10 genes for further validation by pyrosequencing. We established that methylation of CpG sites within 5 of those 10 genes distinguishes cirrhotic patients who subsequently developed HCC from those who stayed cancer-free (cirrhotic controls), demonstrating potential as early detection biomarkers in populations at risk. The best predictive value was detected for CpGs located within BARD1 (AUC = 0.70, asymptotic significance < 0.01). Using an additive logistic regression model, we further showed that 9 CpG loci within those 5 genes, covered by the pyrosequenced probes, constitute a panel with high diagnostic accuracy (AUC = 0.887; 95% CI: 0.80-0.98). The panel was able to distinguish pre-diagnostic cases from cancer-free cirrhotic controls with 88% sensitivity at 70% specificity. Using blood as a minimally invasive material and pyrosequencing as a straightforward quantitative method, the established biomarker panel has high potential to be developed into a routine clinical test after validation in larger cohorts. This study was supported by the Showalter Trust, the American Cancer Society (IRG#14-190-56), and the Purdue Center for Cancer Research (P30 CA023168), granted to BS.
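The accuracy figures reported for the panel (AUC, sensitivity at a fixed specificity) can be computed nonparametrically from model scores. A small sketch using the Mann-Whitney formulation of AUC, with made-up scores rather than the study's data:

```python
def auc_from_scores(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random case outscores a random control
    (ties count one half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def sensitivity_at_specificity(pos_scores, neg_scores, specificity):
    """Best sensitivity achievable at thresholds that reach at least
    the requested specificity on the controls."""
    best = 0.0
    for t in sorted(set(pos_scores) | set(neg_scores)):
        spec = sum(n < t for n in neg_scores) / len(neg_scores)
        if spec >= specificity:
            sens = sum(p >= t for p in pos_scores) / len(pos_scores)
            best = max(best, sens)
    return best
```

On real data the scores would be the fitted logistic-regression probabilities for each pre-diagnostic case and cirrhotic control.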

Keywords: biomarker, DNA methylation, early detection, hepatocellular carcinoma

Procedia PDF Downloads 271
47 Recurrent Torsades de Pointes Post Direct Current Cardioversion for Atrial Fibrillation with Rapid Ventricular Response

Authors: Taikchan Lildar, Ayesha Samad, Suraj Sookhu

Abstract:

Atrial fibrillation with rapid ventricular response results in the loss of atrial kick and shortened ventricular filling time, which often leads to decompensated heart failure. Pharmacologic rhythm control is the treatment of choice, and patients frequently benefit from the restoration of sinus rhythm. When pharmacologic treatment is unsuccessful or a patient deteriorates hemodynamically, direct current cardioversion is the treatment of choice. Torsades de pointes, French for “twisting of the points,” is a rare but under-appreciated risk of cardioversion therapy and accounts for a significant number of sudden cardiac deaths each year. A 61-year-old female with no significant past medical history presented to the Emergency Department with worsening dyspnea. An electrocardiogram showed atrial fibrillation with rapid ventricular response, and a chest X-ray was significant for bilateral pulmonary vascular congestion. Full-dose anticoagulation and diuresis were initiated, with moderate improvement in symptoms. A transthoracic echocardiogram revealed biventricular systolic dysfunction with a left ventricular ejection fraction of 30%. After consultation with an electrophysiologist, the consensus was to proceed with restoration of sinus rhythm, which would likely improve the patient’s heart failure symptoms and possibly the ejection fraction. A transesophageal echocardiogram was negative for left atrial appendage thrombus; the patient was treated with a loading dose of amiodarone and underwent successful direct current cardioversion with 200 Joules. The patient was placed on telemetry monitoring for 24 hours and was noted to have frequent premature ventricular contractions with subsequent degeneration to torsades de pointes. The patient was found unresponsive and pulseless; cardiopulmonary resuscitation was initiated with cardioversion, and return of spontaneous circulation to normal sinus rhythm was achieved after four minutes.
The post-cardiac arrest electrocardiogram showed sinus bradycardia with a heart-rate-corrected QT interval of 592 milliseconds. The patient continued to have frequent premature ventricular contractions and required two additional cardioversions, along with intravenous magnesium and lidocaine, to achieve return of spontaneous circulation. An automatic implantable cardioverter-defibrillator was subsequently implanted for secondary prevention of sudden cardiac death. The backup pacing rate of the device was set higher than usual in an attempt to prevent premature ventricular contraction-induced torsades de pointes. The patient did not have any further ventricular arrhythmias after implantation. Overdrive pacing is a method used to treat premature ventricular contraction-induced torsades de pointes by reducing the patient’s susceptibility to R-on-T-induced ventricular arrhythmias. Pacing at a rate of 90 beats per minute succeeded in controlling the arrhythmia without the need for traumatic cardiac defibrillation. In our patient, conversion of atrial fibrillation with rapid ventricular response to normal sinus rhythm resulted in a slower heart rate and an increased probability of a premature ventricular contraction occurring on the T-wave with ensuing ventricular arrhythmia. This case highlights direct current cardioversion for atrial fibrillation with rapid ventricular response resulting in persistent ventricular arrhythmia, requiring automatic implantable cardioverter-defibrillator placement with overdrive pacing to prevent recurrence.

Keywords: refractory atrial fibrillation, atrial fibrillation, overdrive pacing, torsades de pointes

Procedia PDF Downloads 107
46 The Development of the Geological Structure of the Bengkulu Fore Arc Basin, Western Edge of Sundaland, Sumatra, and Its Relationship to Hydrocarbon Trapping Mechanism

Authors: Lauti Dwita Santy, Hermes Panggabean, Syahrir Andi Mangga

Abstract:

The Bengkulu Basin is part of the Sunda Arc system, a classic convergent-type margin that occurs around the southern rim of the Eurasian continental (Sundaland) plate. The basin is located between the deep sea trench (Mentawai outer-arc high) and the volcanic/magmatic arc of the Barisan Mountains Range. To the northwest it is bounded by the Padang High, to the northeast by the Barisan Mountains (Sumatra Fault Zone), to the southwest by the Mentawai Fault Zone, and to the southeast by the Semangko High/Sunda Strait. The stratigraphic succession and tectonic development can be broadly divided into five stages/periods, i.e., Late Jurassic-Early Cretaceous, Late Eocene-Early Oligocene, Late Oligocene-Early Miocene, Middle Miocene-Late Miocene, and Pliocene-Pleistocene, mainly controlled by the development of subduction activity. The Pre-Tertiary basement consists of sedimentary rocks and shallow-water limestone, calcareous mudstone, cherts, and tholeiitic volcanic rocks, Late Jurassic to Early Cretaceous in age. Sedimentation in this basin depended on the relief of the Pre-Tertiary basement (Woyla Terrane) and occurred in two stages: a transgressive stage during the latest Oligocene-early Middle Miocene (Seblat Formation), and a regressive stage during the latest Middle Miocene-Pleistocene (Lemau, Simpangaur, and Bintunan Formations). Faulting is more intense in the Pre-Tertiary basement than in the overlying Tertiary cover. Two main fault trends can be distinguished: Northwest-Southeast faults and Northeast-Southwest faults. The NW-SE (Ketaun) faults are commonly laterally persistent and are interpreted as part of the Sumatran Fault System. They commonly form the boundaries of the Pre-Tertiary basement highs and are therefore one of the fault elements controlling the geometry and development of the Tertiary sedimentary basins. The Northeast-Southwest faults form a conjugate set to the Northwest-Southeast faults; they formed in the earliest Tertiary and were reactivated during the Plio-Pleistocene in a compressive mode with subsequent dextral displacement.
Block faulting across these two sets of faults, related to approximately North-South compression in Paleogene time, produced a series of elongate basins separated by basement highs in the back-arc and fore-arc regions. The Bengkulu Basin is interpreted as having evolved from a pull-apart feature in the area southwest of the main Sumatra Fault System, related to NW-SE-trending dextral shear. A Pyrolysis Yield (PY) vs. Total Organic Carbon (TOC) diagram shows that the Seblat and Lemau Formations are oil- and gas-prone, with source rock quality ranging from excellent to good (Lemau Formation) and from fair to poor (Seblat Formation). The fine-grained carbonaceous sediments of the Seblat and Lemau Formations serve as source rocks, the coarse-grained and carbonate sediments of the same formations as reservoir rocks, and claystone beds in the Seblat and Lemau Formations as cap rock. Source rock maturation ranges from late immature to early mature, with kerogen types II and III (Seblat Formation), and from late immature to post-mature, with kerogen types I and III (Lemau Formation). The burial history shows depths down to 2500 m, with paleotemperatures reaching 80°C. Trapping mechanisms developed during the Oligo-Miocene and Middle Miocene, mainly in the block faulting system.

Keywords: fore arc, Bengkulu, Sumatra, Sundaland, hydrocarbon, trapping mechanism

Procedia PDF Downloads 538
45 Adaptable Path to Net Zero Carbon: Feasibility Study of Grid-Connected Rooftop Solar PV Systems with Rooftop Rainwater Harvesting to Decrease Urban Flooding in India

Authors: Rajkumar Ghosh, Ananya Mukhopadhyay

Abstract:

India has seen enormous urbanization in recent years, resulting in increased energy consumption and water demand in its metropolitan regions. The adoption of grid-connected solar rooftop systems and rainwater harvesting has gained significant popularity in urban areas to address these challenges while also boosting sustainability and environmental consciousness. Grid-connected solar rooftop systems offer a long-term solution to India's growing energy needs. Solar panels erected on the rooftops of residential and commercial buildings generate power by utilizing the abundant solar energy available across the country. Solar rooftop systems generate clean, renewable electricity, reducing reliance on fossil fuels and lowering greenhouse gas emissions, which is compatible with India's goal of reducing its carbon footprint. Urban residents and companies can save money on electricity by generating their own and potentially selling excess power back to the grid through net metering arrangements. India provides several financial incentives (e.g., a 40% subsidy for system capacities of 1 kW to 3 kW) to stimulate the installation of solar rooftop systems, making them an economically viable option for city dwellers; subsidies of up to 70% are available in special states such as Uttarakhand, Sikkim, Himachal Pradesh, Jammu & Kashmir, and Lakshadweep. Incorporating solar rooftops into urban infrastructure contributes to sustainable urban expansion by alleviating pressure on traditional energy sources, improving air quality, and improving power supply reliability. Rainwater harvesting is another key component of India's sustainable urban development. It comprises collecting and storing rainwater for non-potable applications such as irrigation, toilet flushing, and groundwater recharge.
Rainwater harvesting helps to conserve water resources by lowering the demand on freshwater sources; this technology is crucial in water-stressed areas to ensure a sustainable water supply. Excessive rainwater runoff in metropolitan areas can lead to urban flooding. Solar PV systems combined with rooftop rainwater harvesting absorb and channel excess rainwater, which helps to reduce flooding and waterlogging in smart cities. Rainwater harvesting systems are inexpensive and quick to set up, making them an attractive option for city dwellers and businesses looking to save money on water. Rainwater harvesting systems are now compulsory in several Indian states for specified types of buildings (by-law: rooftop area ≥ 300 sq. m), ensuring widespread adoption. Finally, grid-connected solar rooftop systems and rainwater harvesting are important to India's long-term urban development. They not only reduce the environmental impact of urbanization but also empower individuals and businesses to control their energy and water requirements. The G20 summit focused on green financing, fossil fuel phase-out, and the renewable energy transition, and the G20 Summit in New Delhi reaffirmed India's commitment to battle climate change by doubling renewable energy capacity. To address climate change and mitigate global warming, India intends to attain 280 GW of solar renewable energy by 2030 and net zero carbon emissions by 2070. With continued government support and increased awareness, these strategies will help India develop a more resilient and sustainable urban future.

Keywords: grid-connected solar PV system, rooftop rainwater harvesting, urban flooding, groundwater, net zero carbon emission

Procedia PDF Downloads 55
44 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)

Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani

Abstract:

According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims to find alternative feedstocks for its biorefineries to eliminate the use of palm oil by 2023. The ideal vegetable oils for use in biorefineries are therefore those obtainable from plants that grow on marginal lands with low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract oils to be exploited as biorefinery feedstock. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterizations include determination of water and ash content, elemental analysis (CHNS analysis, X-Ray Fluorescence, Inductively Coupled Plasma-Optical Emission Spectroscopy, ICP-Mass Spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di-, and triacylglycerols, and fatty acid composition. Finally, Nuclear Magnetic Resonance and Fourier Transform-Infrared spectroscopies are exploited together with GC-MS and Fourier Transform-Ion Cyclotron Resonance to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and the one of greatest interest for biorefinery processes.
Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in oils, while transmethylation yields a mixture of fatty acid esters and glycerol, thus allowing the glyceride composition to be evaluated in terms of Fatty Acid Methyl Esters (FAME). Cotton oil was extracted from cotton oilcake, croton oil was obtained by seed pressing and by ASE extraction of seeds and oilcake, while castor oil came from seed pressing (not performed in Eni laboratories). GC-FID analyses showed that the cotton oil is 90% triglycerides and about 6% diglycerides, while free fatty acids account for about 2%. In terms of FAME, C18 acids make up 70% of the total, with linoleic acid as the major constituent. Palmitic acid is present at 17.5%, while the other acids are at low concentration (<1%). Both analyses show the presence of non-gas-chromatographable compounds. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared to the previous oils. Finally, castor oil is mostly constituted of triacylglycerols (about 69%), followed by diglycerides (about 10%). About 85.2% of total FAME is ricinoleic acid, as a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.
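Composition percentages like those above are typically derived from GC-FID peak areas by area-% normalization. A minimal sketch of that bookkeeping step (the peak areas below are invented for illustration, loosely mirroring the croton FAME profile; real work would apply response-factor corrections):

```python
def area_percent(peak_areas):
    """Area-% normalization of GC-FID peaks: each component's share
    of the total integrated area, uncorrected for response factors."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}
```

The normalized percentages sum to 100 by construction, which is a useful sanity check on the integration.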

Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil

Procedia PDF Downloads 112
43 Geomechanical Properties of Tuzluca (Eastern Turkey) Bedded Rock Salt and Geotechnical Safety

Authors: Mehmet Salih Bayraktutan

Abstract:

The geomechanical properties of the rock salt deposits in the Tuzluca salt mine area (Eastern Turkey) were studied to model the operation and excavation strategy. This research focused on calculating the critical span height that meets the safety requirements. The Tuzluca Hills mine site consists of alternating parallel beds of salt (NaCl) and gypsum (CaSO4·2H2O). The rock salt beds are more resistant than the narrow gypsum interlayers and form almost 97 percent of the total height of the hill; therefore, the geotechnical safety of the galleries depends on the mechanical criteria of the rock salt cores. Deposition in the Tuzluca Basin was completed by the Tuzluca evaporites as the uppermost stratigraphic unit. Mining operations are currently performed by classic mechanical excavation using the room-and-pillar method, and rooms and pillars are experiencing an initial stage of fracturing in places. The geotechnical safety of the whole mining area was evaluated by Rock Mass Rating (RMR), Rock Quality Designation (RQD), spacing of joints, and the interaction of groundwater with the fracture system. In general, bedded rock salt shows large lateral deformation capacity while the deformation modulus stays at relatively small values (here E = 9.86 GPa). In such litho-stratigraphic environments, creep is a critical failure mechanism. The steady-state creep rate of rock salt is greater than that of the interbedded layers. Under long-lasting compressive stresses, creep may cause shear displacements, partly along bedding planes. In time, steady-state creep passes into the accelerated (tertiary) stage. Uniaxial compression creep tests were performed on specimens to characterize rock salt strength. On rock salt cores, average axial strength and strain at failure were 18-24 MPa and 0.43-0.45%, respectively, with uniaxial compressive strengths of 26-32 MPa from bedded rock salt cores.
The elastic modulus is comparatively low, but lateral deformation of the rock salt is high under uniaxial compression: Poisson ratio = 0.44, break load = 156 kN, cohesion c = 12.8 kg/cm2, specific gravity SG = 2.17 g/cm3. The fracture system (spacing of fractures, joints, faults, and offsets) was evaluated under the acting geodynamic mechanism. Two sand beds, each 4-6 m thick, exist near the top and at the top of the evaporite sequence; they act as aquifers and retain infiltrated water for long durations, which may result in failure of roofs or pillars. Two major active seismic fault planes (striking N30W and N70E) and parallel fracture strands pose a moderate seismically triggered risk of structural deformation to the rock salt bedding sequence. Earthquakes and floods are the two prevailing sources of geohazards in this region; the seismotectonic activity of the mine site is based on the crossing framework of the Kagizman and Igdir faults. Dominant hazard risk sources include: a) the weak mechanical properties of the rock salt, gypsum, and anhydrite beds (creep); b) physical discontinuities cutting across the thick parallel layers of the evaporite mass; c) intercalated beds of weakly cemented or loose sand and clayey-sandy sediments. On the other hand, the absorption of seismic wave amplitudes by the parallel-bedded salt-gypsum deposits has a mitigating effect on the rock mass.
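Steady-state (secondary) creep of rock salt is commonly described by the Norton power law, eps_dot = A·sigma^n·exp(−Q/(R·T)). A minimal sketch of that relation follows; the constants A, n, and Q in the test are hypothetical placeholders of the right order for rock salt, not fitted Tuzluca values:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def norton_creep_rate(stress_mpa, temp_k, a_const, n_exp, q_act):
    """Steady-state creep strain rate from the Norton power law:
    eps_dot = A * sigma^n * exp(-Q / (R*T)).
    a_const, n_exp (stress exponent) and q_act (activation energy,
    J/mol) are material constants fitted from creep tests."""
    return a_const * stress_mpa ** n_exp * math.exp(-q_act / (R_GAS * temp_k))
```

The strong stress sensitivity (rate scales as sigma^n) is why span height, which sets the pillar stress, is the critical design variable for gallery stability.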

Keywords: bedded rock salt, creep, failure mechanism, geotechnical safety

Procedia PDF Downloads 172
42 The Ecuador Healthy Food Environment Policy Index (Food-EPI)

Authors: Samuel Escandón, María J. Peñaherrera-Vélez, Signe Vargas-Rosvik, Carlos Jerves Córdova, Ximena Vélez-Calvo, Angélica Ochoa-Avilés

Abstract:

Overweight and obesity in childhood are considered risk factors for developing nutrition-related non-communicable diseases (NCDs), such as diabetes, cardiovascular diseases, and cancer. In Ecuador, 35.4% of 5- to 11-year-olds and 29.6% of 12- to 19-year-olds are overweight or obese. Globally, unhealthy food environments, characterized by high consumption of processed/ultra-processed food and rapid urbanization, are strongly related to the rise in nutrition-related NCDs. The evidence shows that in low- and middle-income countries (LMICs), fiscal policies and regulatory measures significantly reduce unhealthy food environments, achieving substantial advances in health. However, in some LMICs, little is known about the impact of government action to implement healthy food environment policies. This study aimed to generate evidence on the state of implementation of public policy focused on food environments for the prevention of overweight and obesity in children and adolescents in Ecuador, compared to global best practices, and to target key recommendations for reinforcing current strategies. After adapting INFORMAS' Healthy Food Environment Policy Index (Food-EPI) to the Ecuadorian context, the policy and infrastructure support components were assessed. Individual online interviews were performed using fifty-one indicators to analyze the level of implementation of policies directly or indirectly related to preventing overweight and obesity in children and adolescents, compared to international best practices. Additionally, a participatory workshop was conducted to identify the critical indicators and generate recommendations to reinforce or improve political action around them. In total, 17 government and non-government experts were consulted.
Of the 51 assessed indicators, only the one corresponding to nutritional information and ingredient labelling registered an implementation level higher than 60% (67%) relative to international best practices. Among the 17 indicators identified as priorities by the participants, those corresponding to the provision of local products in school meals and the limitation of unhealthy-product promotion in traditional and digital media had the lowest levels of implementation (34% and 11%, respectively) compared to global best practices. The participants identified more barriers (e.g., lack of continuity of effective policies across government administrations) than facilitators (e.g., growing interest from the Ministry of Environment because of the environmental impact of eating behavior) for Ecuador to move closer to international best practices. Finally, among the participants' recommendations, we highlight the need for policy-evaluation systems, transparency about the impact of policies, transformation of successful strategies into laws or regulations to make them mandatory, and regulation of the power and influence of the food industry (conflicts of interest). Actions that promote a more active role for society in the stages of policy formation, and that better articulate the work of different government levels and institutions in implementing policy, are necessary to generate a noteworthy impact on preventing overweight and obesity in children and adolescents. Establishing internal evaluation systems for existing strategies, so that successful actions are strengthened, gaps are filled with new policies, and low-impact policies are reformed, should be a priority for the Ecuadorian government to improve the country's food environments.
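The benchmarking step described above can be sketched as a simple scoring routine: each indicator receives expert ratings of its implementation level relative to international best practice, ratings are averaged per indicator, and indicators falling below a threshold are flagged as priorities. This is a minimal illustration only; the indicator names, ratings, and the 60% threshold below are chosen to mirror the figures in the abstract, not taken from the study's data.

```python
# Hypothetical Food-EPI-style scoring sketch: expert ratings (0-100, percent
# implementation vs. international best practice) are averaged per indicator,
# and low-scoring indicators are flagged for priority action.

def implementation_level(ratings):
    """Mean implementation level (%) across expert ratings for one indicator."""
    return sum(ratings) / len(ratings)

# Illustrative ratings only (three hypothetical experts per indicator).
indicators = {
    "nutrition labelling": [70, 64, 67],
    "local products in school meals": [30, 38, 34],
    "limits on unhealthy-product promotion": [10, 12, 11],
}

levels = {name: implementation_level(r) for name, r in indicators.items()}
# Flag indicators implemented at less than 60% of best practice.
priorities = [name for name, lvl in levels.items() if lvl < 60]
```

With these illustrative ratings, only "nutrition labelling" clears the 60% mark, matching the pattern reported in the abstract.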

Keywords: children and adolescents, food-EPI, food policies, healthy food environment

Procedia PDF Downloads 37
41 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework

Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard

Abstract:

Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention. 
The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from the WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, or mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools addressing five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic in that data volumes, instructions, and activities could overwhelm managers, and tools are not always applied in suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can be tested and developed further.
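The problem-tree selection guide described above is, at its core, a lookup from a named challenge to candidate tool categories. A minimal sketch of that idea, assuming a flat challenge-to-tools mapping (the pairings below are illustrative groupings drawn loosely from the tool categories mentioned in the abstract, not the authors' actual framework):

```python
# Hypothetical problem-tree lookup: a programme manager names one of the five
# common challenges and receives candidate tool categories to consider.
# The mapping is illustrative, not the study's published framework.

TOOL_GUIDE = {
    "identifying populations": [
        "geographical mapping", "secondary analysis of routine data"],
    "understanding communities": [
        "key informant interviews", "focus group discussions"],
    "service access and use": [
        "cross-sectional surveys", "focus group discussions"],
    "improving services": [
        "programme guidance documents (WHO/UNICEF/USAID)"],
    "improving coverage": [
        "reminder/recall", "outreach", "multi-component interventions"],
}

def suggest_tools(challenge):
    """Return candidate tool categories for a named challenge."""
    return TOOL_GUIDE.get(challenge, ["no tool listed for this challenge"])
```

A fuller implementation would also weigh context and available data, as the framework requires, before ranking the candidates.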

Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health

Procedia PDF Downloads 119
40 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses, and industries transform the way they operate. AI is used to automate tasks, improving decision-making and efficiency, and is embedded in sensors and automation to save time and eliminate human error in repetitive tasks. Today, AI draws on vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as the risk brought by the unprecedented pace of adoption of AI solutions in government cannot yet be determined. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions that require no specialised technical skills. This increased accessibility, however, must be balanced against higher risks to the health and safety of citizens. 
Meanwhile, public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps ("There is an AI for That" in government). Other challenges include the apparent absence of legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Traditional corporate risk and governance frameworks, together with regulatory and legislative frameworks, will therefore need to be re-evaluated against AI's unique challenges: its rapidly evolving nature, ethical considerations, and heightened regulatory scrutiny affecting consumer safety and increasing risks for government. Creating an effective, efficient governance regime for the NSW Government, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage; if such harm is not addressed, public confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriate and balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 44
39 Stabilizing Additively Manufactured Superalloys at High Temperatures

Authors: Keivan Davami, Michael Munther, Lloyd Hackel

Abstract:

The control of properties and material behavior by implementing thermal-mechanical processes is based on mechanical deformation and annealing according to a precise schedule that will produce a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures. However, the mechanism(s) controlling this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock-based (50 ns pulse duration) post-processing technique used for extending performance levels and improving the service life of critical components by developing deep levels of plastic deformation, thereby generating a high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied by an increase in hardness and enhanced resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. At temperatures above 0.5Tm, the compressive residual stresses relax and yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover by mechanisms such as climb and recombination, which proceed rapidly at high temperatures. 
Furthermore, precipitates coarsen and grains grow; virtually all of the available microstructural barriers become ineffective. Our results indicate that by using "cyclic" treatments with sequential LP and annealing steps, the compressive stresses survive and the microstructure remains stable after exposure to temperatures exceeding 0.5Tm for long periods of time. When laser peening is combined with annealing, the dislocations formed by LP and the precipitates formed during annealing interact in a complex way that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physical, materials science, and mechanical engineering concepts. It could lead to metals operating at higher sustained temperatures, enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications. However, mechanistic understanding of the often complex interactions between dislocations, solute atoms, and precipitates during plastic deformation has largely remained scattered in the literature. In this research, the actual mechanisms involved in the novel cyclic LP/annealing process are elucidated through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results of this research help validate a novel laser processing technique for high-temperature applications. This will greatly expand the applications of laser peening technology, originally devised only for temperatures below half the melting temperature.

Keywords: laser shock peening, mechanical properties, indentation, high temperature stability

Procedia PDF Downloads 125
38 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs reviewing captured data on screens in the review facility. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance, in terms of the accuracy and efficiency of reviews within the available time frame, is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. They therefore need to know which workload assessment methodologies provide reliable and valid data for optimizing resourcing of on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data on the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session, and a short employee engagement survey at the end of the study period captured impacts on job satisfaction and motivation. 
The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision, i.e., accurately detected defects rather than false positives. Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of existing defects detected. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated positively with subjective performance assessments; reviewers rated their own performance higher when they reviewed track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates, in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures, to ensure that recommendations for work system optimization are evidence-based and reliable.
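The performance measures named above (sensitivity and precision) have standard definitions in terms of a session's defect counts: sensitivity is the share of existing defects that were detected, and precision is the share of flagged items that were real defects. A minimal sketch, using hypothetical session counts rather than the study's data:

```python
# Sensitivity and precision for one on-screen review session, from the
# standard confusion-matrix definitions. Counts below are illustrative.

def review_metrics(true_pos, false_pos, false_neg):
    """true_pos: defects correctly detected;
    false_pos: non-defects incorrectly flagged;
    false_neg: existing defects missed by the reviewer."""
    sensitivity = true_pos / (true_pos + false_neg)  # share of real defects found
    precision = true_pos / (true_pos + false_pos)    # share of flags that are real
    return sensitivity, precision

# Hypothetical session: 18 defects found, 2 false alarms, 6 defects missed.
sens, prec = review_metrics(18, 2, 6)
```

In the study's terms, a reviewer under high temporal demand would tend to show a lower `sensitivity` (more missed defects), while one under high mental demand would tend to show a higher `precision` (fewer false alarms).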

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 89