Search results for: convex index
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3664

64 Biotite from Contact-Metamorphosed Rocks of the Dizi Series of the Greater Caucasus

Authors: Irakli Javakhishvili, Tamara Tsutsunava, Giorgi Beridze

Abstract:

The Caucasus is a component of the Mediterranean collision belt. The Dizi series is situated within the Greater Caucasian region of the Caucasus and crops out in the core of the Svaneti anticlinorium. The series was formed under continental-slope conditions on the southern passive margin of a small ocean basin. The Dizi series crops out over about 560 square km, with a thickness of 2000-2200 m. The rocks are faunally dated from the Devonian to the Triassic inclusive. The series is composed of terrigenous phyllitic schists, sandstones, quartzitic aleurolites, and lenses and interlayers of marbleized limestones. During the early Cimmerian orogeny, they underwent regional metamorphism of the chlorite-sericite subfacies of the greenschist facies. Typical minerals of the metapelites are chlorite, sericite, augite, quartz, and tourmaline, whereas in the basic rocks actinolite, fibrolite, prehnite, calcite, and chlorite are developed. The Dizi series is cut by polyphase intrusions of gabbros, diorites, quartz-diorites, syenite-diorites, syenites, and granitoids. Their K-Ar age dating (176-165 Ma) indicates that their formation corresponds to the Bathonian orogeny. The Dizi series is well studied geologically, but the very complicated processes of its regional and contact metamorphism are insufficiently investigated. The aim of the authors was a detailed study of the contact metamorphism of the series rocks. Investigations were accomplished using the following methodologies: identification of key sections, collection of material, microscopic study of samples, microprobe and structural analysis of minerals, and X-ray determination of elements. The contact-metamorphosed rocks of the Dizi series formed under the influence of the Bathonian magmatites on metapelites and carbonate-enriched rocks. They are represented by quartz-, biotite-, sericite-, graphite-, andalusite-, muscovite-, plagioclase-, corundum-, cordierite-, clinopyroxene-, hornblende-, cummingtonite-, actinolite-, and tremolite-bearing hornfels, marbles, and skarns. The contact metamorphism aureole reaches 350 meters. Biotite is developed only in the contact-metamorphosed rocks and is a rather informative index mineral. In metapelites, biotite is formed as a result of the reaction between phengite, chlorite, and leucoxene, whereas in basites it replaces actinolite or actinolite-hornblende. To study the compositional regularities of biotites, they were investigated from both metapelites and metabasites. In general, biotite from the basites is characterized by a higher titanium content than biotite from the metapelites, while biotites from the metapelites are distinguished by a higher aluminum content. In biotites, the amounts of titanium and aluminum increase toward the contact, while the magnesium content decreases. Metapelite biotites are characterized by a higher proportion of aluminum in octahedral sites than biotites of the basites: in biotites of metapelites, tetrahedral aluminum amounts to 28–34% and octahedral aluminum to 15–26%, whereas in basites tetrahedral aluminum is 28–33% and octahedral aluminum 7–21%. As a result of the study of minerals, including biotite, from the contact-metamorphosed rocks of the Dizi series, three exocontact zones with corresponding mineral assemblages were identified. It was established that contact metamorphism in the aureole of the Dizi series intrusions proceeded at a significantly higher temperature and lower pressure than the regional metamorphism preceding it.

Keywords: biotite, contact metamorphism, Dizi series, the Greater Caucasus

Procedia PDF Downloads 131
63 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure

Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek

Abstract:

Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, there is no consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversies and have not produced a commonly accepted consensus. The aim of the study: On the basis of the WHO and OECD approaches, we developed our own methodology to assess the performance of health systems in Central and Eastern European countries. We applied the method to compare the effects of health system reforms in 20 countries of the region in order to evaluate the dynamics of changes in terms of health system outcomes. Methods: Data were collected for a 25-year period after the fall of communism, divided into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and a Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary, and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change are high in the case of Estonia and Latvia; moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia, and Moldova; and small for Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the fluctuation dynamics of the measured value over time, yet it does not necessarily mean that such a dynamic range reflects an improvement in a given country. In reality, countries moved along the scale with different effects: Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost distance to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving outcomes. Conclusions: Countries that decided to implement comprehensive health reforms achieved a positive result in terms of further improvements in health system efficiency. Besides, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcomes improvement are highly diverse between countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
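
For readers unfamiliar with the normalization step, zeroed unitarization is, in its standard textbook form, a min-max rescaling of each indicator followed by aggregation. The abstract does not spell out the indicator set or weighting, so the equal-weight mean below is an assumption, shown only to make the construction of the SMO concrete.

```latex
% Zeroed unitarization of indicator j for country i (stimulants: higher is better)
z_{ij} = \frac{x_{ij} - \min_{i} x_{ij}}{\max_{i} x_{ij} - \min_{i} x_{ij}}
% Destimulants (lower is better) are reflected before aggregation
z_{ij} = \frac{\max_{i} x_{ij} - x_{ij}}{\max_{i} x_{ij} - \min_{i} x_{ij}}
% Synthetic Measure of Outcomes as an (assumed) equal-weight mean over m indicators
\mathrm{SMO}_{i} = \frac{1}{m} \sum_{j=1}^{m} z_{ij}
```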

Keywords: health system outcomes, health reforms, health system assessment, health system evaluation

Procedia PDF Downloads 289
62 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan

Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova

Abstract:

While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study aims to test the method of establishing a correlation between urban heat islands (UHI) and urban greenery distribution for prioritization of heat-vulnerable areas for revegetation. Armenia has failed to consider measures to mitigate UHI in urban development strategies despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can be effective in reducing UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of massifs of land surface types using normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Examination of the heat-vulnerable city areas (in this study, the proportion of individuals aged 75 years and above) is based on demographic data (Census 2011). Based on satellite images (Sentinel-2) captured on June 5, 2021, NDVI calculations were conducted. The massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was identified using one satellite image from Landsat-8 on August 15, 2021. In this research, local relief is not considered, as the study mainly focuses on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas. The temperature excess ranges from a low in Norq Marash to a high in Nubarashen. Norq Marash and Avan have the highest tree and grass coverage proportions, with 56.2% and 54.5%, respectively. In other districts, the balance of wastelands and buildings is three times higher than the grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. Studies have shown that decreased tree and grass coverage within a district correlates with a higher temperature increase. The temperature excess is highest in Erebuni, Ajapnyak, and Nubarashen districts. These districts have less than 25% of their area covered with grass and trees. On the other hand, Avan and Norq Marash districts have a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion of the elderly population (35%) aged 75 years and above reside in the Erebuni, Ajapnyak, and Shengavit neighborhoods, which are more susceptible to heat stress with an LST higher than in other city districts. The findings suggest that the method of comparing the distribution of green massifs and LST can contribute to the prioritization of heat-vulnerable city areas for revegetation. The method can become a rationale for the formation of an urban greening program.
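
As a rough illustration of the remote-sensing step described above, the sketch below computes NDVI from Sentinel-2 red (B4) and near-infrared (B8) reflectance arrays, bins it into five surface classes, and summarizes district-mean LST against the vegetated share. The class thresholds, the 0.4 "trees and grass" cut-off, and the synthetic inputs are illustrative assumptions, not the authors' exact processing chain.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

def classify_surface(ndvi_img: np.ndarray) -> np.ndarray:
    """Bin NDVI into five illustrative surface classes (thresholds are assumptions)."""
    bins = [-1.0, 0.0, 0.2, 0.4, 0.6, 1.0]   # water/built-up ... dense vegetation
    return np.digitize(ndvi_img, bins[1:-1])

def district_summary(ndvi_img, lst_img, district_mask, district_ids):
    """Mean LST and vegetated-pixel share per district."""
    rows = []
    for d in district_ids:
        m = district_mask == d
        green_share = np.mean(ndvi_img[m] > 0.4)  # assumed "trees and grass" cut-off
        rows.append((d, float(lst_img[m].mean()), float(green_share)))
    return rows

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nir, red = rng.random((100, 100)), rng.random((100, 100))
    lst = 30 + 10 * rng.random((100, 100))          # stand-in Landsat-8 LST raster
    mask = rng.integers(1, 5, size=(100, 100))      # stand-in district raster
    v = ndvi(nir, red)
    for d, mean_lst, green in district_summary(v, lst, mask, range(1, 5)):
        print(f"district {d}: mean LST {mean_lst:.1f} C, vegetated share {green:.2f}")
```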

Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation

Procedia PDF Downloads 70
61 Promotion of Healthy Food Choices in School Children through Nutrition Education

Authors: Vinti Davar

Abstract:

Introduction: Childhood overweight increases the risk for certain medical and psychological conditions. Millions of school-age children worldwide are affected by serious yet easily treatable and preventable illnesses that inhibit their ability to learn. Healthier children stay in school longer, attend more regularly, learn more and become healthier and more productive adults. Schools are an important setting for nutrition education because one can reach most children, teachers and parents. These years offer a key window for shaping lifetime habits, which have an impact on health throughout life. Against this background, an attempt was made to impart nutrition education to school children in Haryana state of India to promote healthy food choices and to assess the effectiveness of this program. Methodology: This study was completed in two phases. During the first phase, a pre-intervention anthropometric and dietary survey was conducted, the teaching materials for the nutrition intervention program were developed and tested, and the questionnaire was validated. In the second phase, an intervention was implemented in two schools of Kurukshetra, Haryana for six months by personal visits once a week. A total of 350 children in the age group of 6-12 years were selected. Out of these, 279 children (153 boys and 126 girls) completed the study. The subjects were divided into four groups, namely underweight, normal, overweight and obese, based on body mass index-for-age categories. A colorful PowerPoint presentation was used to improve the quality of tiffin, snacks and meals, emphasizing the inclusion of all food groups, especially vegetables every day and fruits at least 3-4 days per week. An extra 20 minutes of aerobic exercise daily was likewise organized and a healthy school environment created. Provision of clean drinking water by school authorities was ensured. Selling of soft drinks and energy-dense snacks in the school canteen as well as advertisements for soft drinks and snacks on the school walls were banned. Post intervention, anthropometric indices and food selections were reassessed. Results: The results of this study reiterate the critical role of nutrition education and promotion in improving healthy food choices by school children. It was observed that normal, overweight and obese children participating in the nutrition education intervention program significantly (p≤0.05) increased their daily seasonal fruit and vegetable consumption. Fat and oil consumption was significantly reduced by overweight and obese subjects. Fast food intake was controlled by obese children. The nutrition knowledge of school children improved significantly (p≤0.05) from pre to post intervention. A highly significant increase (p≤0.00) was noted in the nutrition attitude score after intervention in all four groups. Conclusion: This study has shown that a well-planned nutrition education program can improve nutrition knowledge and promote positive changes in healthy food choices. A nutrition program inculcates wholesome eating and active lifestyle habits in children and adolescents that could not only protect them from chronic diseases and early death but also reduce healthcare costs and enhance the quality of life of citizens and thereby nations.

Keywords: children, eating habits, healthy food, obesity, school going, fast foods

Procedia PDF Downloads 203
60 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, the critical component in Shack-Hartmann sensors, fiber-optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and the reduction of alignment and packaging cost are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition for competing on worldwide markets. Therefore, high demands are put on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive, and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of micro-lenses. However, these techniques need more experimental effort and are also time consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths owing to the numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample (microlens array) as a reference hologram, and a second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed by using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported; the sag agrees well, within the experimental limits, with the specification provided by the manufacturer.
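
The conversion from the measured phase difference to the microlens surface profile follows the standard relation between optical path length and thickness, t = λ·Δφ / (2π·(n_lens − n_air)). The sketch below applies it to an unwrapped phase map and estimates sag and radius of curvature via the spherical-cap relation; the wavelength, refractive indices, and pixel pitch are placeholder assumptions, not the authors' experimental values.

```python
import numpy as np

# Placeholder optical parameters (assumptions, not the authors' experimental values)
WAVELENGTH = 632.8e-9   # He-Ne laser wavelength in metres
N_LENS = 1.56           # refractive index of the microlens material
N_AIR = 1.0
PIXEL_PITCH = 3.45e-6   # camera pixel pitch in metres

def thickness_from_phase(delta_phi: np.ndarray) -> np.ndarray:
    """Physical thickness from unwrapped phase difference:
    t = lambda * delta_phi / (2*pi*(n_lens - n_air))."""
    return WAVELENGTH * delta_phi / (2.0 * np.pi * (N_LENS - N_AIR))

def sag_and_radius(profile: np.ndarray) -> tuple[float, float]:
    """Sag (peak height) and radius of curvature of a 1D lens profile,
    using the spherical-cap relation R = (a**2 + s**2) / (2*s), a = half-aperture."""
    sag = float(profile.max() - profile.min())
    half_aperture = 0.5 * (len(profile) - 1) * PIXEL_PITCH
    radius = (half_aperture**2 + sag**2) / (2.0 * sag)
    return sag, radius

if __name__ == "__main__":
    # Synthetic parabolic phase bump standing in for one microlens
    x = np.linspace(-1.0, 1.0, 201)
    delta_phi = 40.0 * (1.0 - x**2)          # radians, already unwrapped
    t = thickness_from_phase(delta_phi)
    s, r = sag_and_radius(t)
    print(f"sag = {s*1e6:.2f} um, radius of curvature = {r*1e3:.3f} mm")
```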

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 497
59 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage

Authors: Andrew Laming, John Hattie, Mark Wilson

Abstract:

Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean=27) and any ATAR graduates without NAPLAN data (mean=20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. An additional 173 students (14%) did not release their ATARs to their school, requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R²=0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R²=0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). The mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive for laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data are available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
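
A minimal sketch of the 'percentile shift' gain metric described above: each student's strongest NAPLAN domain score is converted to a statewide percentile and subtracted from the final ATAR (itself a percentile-like rank). The statewide score distribution, the participation correction, and all names and numbers below are placeholders, since the abstract does not specify them.

```python
import numpy as np

def to_statewide_percentile(score: float, statewide_scores: np.ndarray) -> float:
    """Convert a NAPLAN scale score to a statewide percentile (share scoring below)."""
    return 100.0 * np.mean(statewide_scores < score)

def percentile_shift(naplan_best: float, atar: float,
                     statewide_scores: np.ndarray,
                     participation_correction: float = 0.0) -> float:
    """Gain = ATAR - statewide percentile of the strongest NAPLAN domain,
    optionally adjusted for plausible ATAR participation at that NAPLAN level
    (the form of that correction is not given in the abstract)."""
    baseline = to_statewide_percentile(naplan_best, statewide_scores)
    return atar - baseline - participation_correction

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    statewide = rng.normal(600, 70, size=50_000)  # illustrative NAPLAN score distribution
    print(round(percentile_shift(naplan_best=680, atar=88.5,
                                 statewide_scores=statewide), 1))
```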

Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean

Procedia PDF Downloads 67
58 Effects and Mechanisms of an Online Short-Term Audio-Based Mindfulness Intervention on Wellbeing in Community Settings and How Stress and Negative Affect Influence the Therapy Effects: Parallel Process Latent Growth Curve Modeling of a Randomized Controlled Trial

Authors: Man Ying Kang, Joshua Kin Man Nan

Abstract:

The prolonged pandemic has posed alarming public health challenges to various parts of the world, and with face-to-face mental health treatment largely curtailed to control virus transmission, online psychological services and self-help mental health kits have become essential. Online self-help mindfulness-based interventions have proved their effects on fostering mental health for different populations across the globe. This study tested the effectiveness of an online short-term audio-based mindfulness (SAM) program in enhancing wellbeing and dispositional mindfulness and reducing stress and negative affect in community settings in China, and explored possible mechanisms of how dispositional mindfulness, stress, and negative affect influenced the intervention effects on wellbeing. Community-dwelling adults were recruited via online social networking sites (e.g., QQ, WeChat, and Weibo). Participants (n=100) were randomized into a mindfulness group (n=50) and a waitlist control group (n=50). In the mindfulness group, participants were advised to spend 10–20 minutes listening to the audio content, including mindful-form practices (e.g., eating, sitting, walking, or breathing), and to practice daily mindfulness exercises for 3 weeks (a total of 21 sessions), whereas those in the control group received the same intervention after data collection in the mindfulness group. Participants in the mindfulness group filled in the World Health Organization Five Well-Being Index (WHO), Positive and Negative Affect Schedule (PANAS), Perceived Stress Scale (PSS), and Freiburg Mindfulness Inventory (FMI) four times: at baseline (T0) and at 1 (T1), 2 (T2), and 3 (T3) weeks, while those in the waitlist control group only filled in the same scales at pre- and post-intervention. Repeated-measures analysis of variance, paired-sample t-tests, and independent-sample t-tests were used to analyze the variable outcomes of the two groups. Parallel process latent growth curve modeling analysis was used to explore the longitudinal moderated mediation effects. The dependent variable was the WHO slope from T0 to T3, the independent variable was Group (1=SAM, 2=Control), the mediator was the FMI slope from T0 to T3, and the moderator was T0 NA and T0 PSS separately. The effects of different levels of the moderator on the WHO slope were explored, including low T0 NA or T0 PSS (Mean-SD), medium T0 NA or T0 PSS (Mean), and high T0 NA or T0 PSS (Mean+SD). The results showed that SAM significantly improved and predicted higher levels of the WHO slope and FMI slope, as well as significantly reduced NA and PSS. The FMI slope positively predicted the WHO slope, and the FMI slope partially mediated the relationship between SAM and the WHO slope. Baseline NA and PSS as moderators were found to be significant between SAM and the WHO slope and between SAM and the FMI slope, respectively. The conclusion was that SAM was effective in promoting levels of mental wellbeing, positive affect, and dispositional mindfulness as well as reducing negative affect and stress in community settings in China. SAM improved wellbeing faster through the faster enhancement of dispositional mindfulness. Medium-to-high baseline negative affect and stress buffered the therapy effects of SAM on the speed of wellbeing improvement.

Keywords: mindfulness, negative affect, stress, wellbeing, randomized control trial

Procedia PDF Downloads 108
57 Chemical, Biochemical and Sensory Evaluation of a Quadrimix Complementary Food Developed from Sorghum, Groundnut, Crayfish and Pawpaw Blends

Authors: Ogechi Nzeagwu, Assumpta Osuagwu, Charlse Nkwoala

Abstract:

Malnutrition in infants due to poverty, poor feeding practices, and the high cost of commercial complementary foods, among other factors, is a concern in developing countries. The study evaluated the proximate, vitamin and mineral compositions, antinutrient and functional properties, and the biochemical, haematological and sensory characteristics of a complementary food made from sorghum, groundnut, crayfish and pawpaw flour blends using standard procedures. The blends were formulated on the protein requirement of infants (18 g/day) using Nutrisurvey linear programming software in ratios of sorghum (S), groundnut (G), crayfish (C) and pawpaw (P) flours of 50:25:10:15 (SGCP1), 60:20:10:10 (SGCP2), 60:15:15:10 (SGCP3) and 60:10:20:10 (SGCP4). Plain pap (fermented maize flour) (TCF) and cerelac (a commercial complementary food) served as the basal and control diets. Thirty weanling male albino rats aged 28-35 days and weighing 33-60 g were purchased and used for the study. After acclimatization, the rats were fed gruels produced from the experimental diets and the control, with water ad libitum, daily for 35 days. The effects of the blends on the lipid profile, blood glucose, haematological indices (RBC, Hb, PCV, MCV), liver and kidney function and weight gain of the rats were assessed. Acceptability of the gruels was assessed at the end of the rat feeding with forty mothers of infants aged ≥ 6 months, who gave their informed consent to participate, using a 9-point hedonic scale. Data were analyzed for means and standard deviations; analysis of variance was carried out, means were separated using Duncan's multiple range test, and significance was judged at 0.05, all using SPSS version 22.0. The results indicated that the crude protein, fibre, ash and carbohydrate contents of the formulated diets were either comparable to or higher than the values in cerelac. The formulated diets (SGCP1-SGCP4) were significantly (P<0.05) higher in vitamin A and thiamin compared to cerelac. The iron content of the formulated diets SGCP1-SGCP4 (4.23-6.36 mg/100 g) was within the recommended iron intake of infants (0.55 mg/day). The phytate (1.56-2.55 mg/100 g) and oxalate (0.23-0.35 mg/100 g) contents of the formulated diets were within the permissible limits of 0-5%. Among the functional properties, bulk density, swelling index, % dispersibility and water absorption capacity significantly (P<0.05) increased and compared favourably with cerelac. The essential amino acids of the formulated blends were within the amino acid profile of the FAO/WHO/UNU reference protein for children 0.5-2 years of age. The urea concentration of rats fed SGCP1-SGCP4 (19.48 mmol/L, 23.76 mmol/L, 24.07 mmol/L, 23.65 mmol/L, respectively) was significantly higher than that of rats fed cerelac (16.98 mmol/L); plain pap had the least value (9.15 mmol/L). Rats fed SGCP1-SGCP4 (116 mg/dl, 119 mg/dl, 115 mg/dl, 117 mg/dl, respectively) had significantly higher glucose levels than those fed cerelac (108 mg/dl). Liver function parameters (AST, ALP and ALT), lipid profile (triglyceride, HDL, LDL, VLDL) and haematological parameters of rats fed the formulated diets were within the normal range. Rats fed SGCP1 gained more weight (90.45 g) than rats fed SGCP2-SGCP4 (71.65 g, 79.76 g, 75.68 g), TCF (20.13 g) and cerelac (59.06 g). In all the sensory attributes, the control was preferred over the formulated diets. The formulated diets were generally adequate and may have the potential to meet the nutrient requirements of infants as a complementary food.
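
The abstract states that the blends were formulated against the infant protein requirement (18 g/day) with linear programming in Nutrisurvey. The sketch below shows the same idea with scipy, minimizing ingredient cost subject to a protein constraint and a fixed total blend weight; all ingredient compositions, costs, and bounds are invented for illustration and are not the authors' data.

```python
import numpy as np
from scipy.optimize import linprog

# Ingredients: sorghum, groundnut, crayfish, pawpaw (per gram of flour)
protein_per_g = np.array([0.10, 0.25, 0.60, 0.05])   # illustrative protein fractions
cost_per_g    = np.array([0.02, 0.05, 0.20, 0.03])   # illustrative cost units

total_blend_g = 100.0        # formulate a 100 g blend
protein_target_g = 18.0      # protein requirement cited in the abstract

# Minimize cost subject to: sum(x) == 100 g  and  protein >= 18 g
res = linprog(
    c=cost_per_g,
    A_ub=[-protein_per_g], b_ub=[-protein_target_g],  # -protein <= -target
    A_eq=[np.ones(4)],     b_eq=[total_blend_g],
    bounds=[(5.0, 70.0)] * 4,                          # keep every ingredient present
    method="highs",
)
print("blend (g):", np.round(res.x, 1),
      " protein (g):", round(float(protein_per_g @ res.x), 1))
```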

Keywords: biochemical, chemical evaluation, complementary food, quadrimix

Procedia PDF Downloads 166
56 Environmental Impacts of Point and Non-Point Source Pollution in Krishnagiri Reservoir: A Case Study in South India

Authors: N. K. Ambujam, V. Sudha

Abstract:

Reservoirs are being contaminated all around the world with point source and non-point source (NPS) pollution. The most common NPS pollutants are sediments and nutrients. Krishnagiri Reservoir (KR), located in the tropical semi-arid climatic zone of Tamil Nadu, South India, was chosen for the present case study. It is the main source of surface water in Krishnagiri district for meeting freshwater demands. The reservoir has lost about 40% of its water-holding capacity due to sedimentation over a period of 50 years. Hence, from the research and management perspective, there is a need for sound knowledge of the spatial and seasonal variations of KR water quality. The specific objectives of the present study were (i) to investigate the longitudinal heterogeneity and seasonal variations of the physicochemical parameters, nutrients and biological characteristics of KR water and (ii) to examine the extent of degradation of water quality in KR. 15 sampling points were identified by a uniform stratified method, and a systematic monthly sampling strategy was selected due to the highly dynamic nature of the reservoir's hydrological characteristics. The physicochemical parameters, major ions, nutrients and chlorophyll a (Chl a) were analysed. The trophic status of KR was classified using Carlson's Trophic State Index (TSI). All statistical analyses were performed using the Statistical Package for the Social Sciences, version 16.0. Spatial maps were prepared for Chl a using ArcGIS. Observations in KR pointed out that electrical conductivity and major ions are highly variable factors, as the reservoir receives inflow from a catchment with different land-use activities. The study of major ions in KR exhibited different trends in their values, and it could be concluded that as the monsoon progresses the major ions in the water decrease, or the water quality stabilizes. The inflow point of KR showed comparatively higher concentrations of nutrients, including nitrate, soluble reactive phosphorus (SRP), total phosphorus (TP), total suspended phosphorus (TSP) and total dissolved phosphorus (TDP), during monsoon seasons. This evidently showed the input of a significant amount of nutrients from the catchment through agricultural runoff. High concentrations of TDP and TSP at the lacustrine zone of the reservoir during the summer season revealed that there was a significant release of phosphorus from the bottom sediments. Carlson's TSI of KR ranged between 81 and 92 during the northeast monsoon and summer seasons. The high and permanent cyanobacterial bloom in KR could be mainly due to the internal loading of phosphorus from the bottom sediments. According to Carlson's TSI classification, Krishnagiri Reservoir was ranked in the hyper-eutrophic category. This study provides necessary basic data on the spatio-temporal variations of water quality in KR and also demonstrates the impact of point and NPS pollution from the catchment area. The high TSI indicates a serious threat from internal P loading and the hyper-eutrophic condition of KR. Several expensive internal measures for the reduction of internal P loading have been introduced by many scientists; however, the outcome of the present research suggests an innovative algae harvesting technique for the removal of sediment nutrients.
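
For reference, Carlson's Trophic State Index used above is computed from Secchi depth, chlorophyll a, or total phosphorus with the standard 1977 formulas; the sketch below implements them and flags the hyper-eutrophic range. The input values are placeholders chosen only to land in the TSI 81-92 range reported for KR, not the reservoir measurements themselves.

```python
import math

def tsi_secchi(sd_m: float) -> float:
    """Carlson TSI from Secchi depth (m): 60 - 14.41*ln(SD)."""
    return 60.0 - 14.41 * math.log(sd_m)

def tsi_chla(chl_ug_l: float) -> float:
    """Carlson TSI from chlorophyll a (ug/L): 9.81*ln(Chl) + 30.6."""
    return 9.81 * math.log(chl_ug_l) + 30.6

def tsi_tp(tp_ug_l: float) -> float:
    """Carlson TSI from total phosphorus (ug/L): 14.42*ln(TP) + 4.15."""
    return 14.42 * math.log(tp_ug_l) + 4.15

def trophic_class(tsi: float) -> str:
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hyper-eutrophic"   # the TSI values of 81-92 reported for KR fall here

if __name__ == "__main__":
    # Placeholder inputs consistent with a hyper-eutrophic state
    for name, value in [("Secchi", tsi_secchi(0.2)),
                        ("Chl a", tsi_chla(180.0)),
                        ("TP", tsi_tp(400.0))]:
        print(f"TSI({name}) = {value:.1f} -> {trophic_class(value)}")
```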

Keywords: NPS pollution, nutrients, hyper-eutrophication, krishnagiri reservoir

Procedia PDF Downloads 321
55 Quantifying Impairments in Whiplash-Associated Disorders and Association with Patient-Reported Outcomes

Authors: Harpa Ragnarsdóttir, Magnús Kjartan Gíslason, Kristín Briem, Guðný Lilja Oddsdóttir

Abstract:

Introduction: Whiplash-Associated Disorder (WAD) is a health problem characterized by motor, neurological and psychosocial symptoms, stressing the need for a multimodal treatment approach. To achieve an individualized multimodal approach, prognostic factors need to be identified early using validated patient-reported and objective outcome measures. The aim of this study is to demonstrate the degree of association between patient-reported and clinical outcome measures of WAD patients in the subacute phase. Methods: Individuals (n=41) with subacute (≥1, ≤3 months) WAD (I-II), medium- to high-risk symptoms, or neck pain rating ≥ 4/10 on the Visual Analog Scale (VAS) were examined. Outcome measures included measurements of movement control (Butterfly test) and cervical active range of motion (cAROM) using the NeckSmart system, which uses an inertial measurement unit (IMU) connected to a computer. The IMU sensor is placed on the participant's head, and the participant receives visual feedback about the movement of the head. Patient-reported neck disability, pain intensity, general health, self-perceived handicap, central sensitization, and difficulties due to dizziness were measured using questionnaires. Excel and R statistical software were used for statistical analyses. Results: Forty-one participants, 15 males (37%) and 26 females (63%), mean (SD) age 36.8 (±12.7), underwent data collection. Mean amplitude accuracy (AA) (SD) in the Butterfly test for the easy, medium, and difficult paths was 2.4 mm (0.9), 4.4 mm (1.8), and 6.8 mm (2.7), respectively. Mean cAROM (SD) for flexion, extension, left, and right rotation was 46.3° (18.5), 48.8° (17.8), 58.2° (14.3), and 58.9° (15.0), respectively. Mean scores on the Neck Disability Index (NDI), VAS, Dizziness Handicap Inventory (DHI), Central Sensitization Inventory (CSI), and 36-Item Short Form Survey RAND version (RAND) were 43% (17.4), 7 (1.7), 37 (25.4), 51 (17.5), and 39.2 (17.7), respectively. Females showed significantly greater deviation in AA compared to males for the easy and medium Butterfly paths (p<0.05). A statistically significant moderate to strong positive correlation was found between the DHI and the easy (r=0.6, p=0.05), medium (r=0.5, p=0.05) and difficult (r=0.5, p<0.05) Butterfly paths, between the total RAND score and all cAROMs (r between 0.4-0.7, p≤0.05) except flexion (r=0.4, p=0.7), and between the NDI score and the CSI (r=0.7, p<0.01), VAS (r=0.5, p<0.01), and DHI (r=0.7, p<0.01) scores, respectively. Discussion: All patient-reported and objective measures were found to be outside the reference range. The results suggest females have worse movement control of the neck in the subacute WAD phase. However, no statistical difference based on gender was found in the patient-reported measures, suggesting that females might have worse movement control than males in general in this phase. The correlation found between the DHI and the Butterfly test can be explained by the fact that the DHI measures proprioceptive symptoms like dizziness and eye movement disorders that can affect the outcome of movement control tests. A correlation was found between the total RAND score and cAROM, suggesting that a reduced range of motion affects quality of life. Significance: The NeckSmart system can detect abnormalities in cAROM, fine movement control, and kinesthesia of the neck. The results suggest females have worse movement control than males and show a moderate to high correlation between several patient-reported and objective measurements.

Keywords: whiplash associated disorders, car-collision, neck, trauma, subacute

Procedia PDF Downloads 69
54 Virulence Factors and Drug Resistance of Enterococci Species Isolated from the Intensive Care Units of Assiut University Hospitals, Egypt

Authors: Nahla Elsherbiny, Ahmed Ahmed, Hamada Mohammed, Mohamed Ali

Abstract:

Background: The enterococci may be considered opportunistic agents, particularly in immunocompromised patients. They are among the top three pathogens causing many healthcare-associated infections (HAIs). Resistance to several commonly used antimicrobial agents is a remarkable characteristic of most species, which may carry various genes contributing to virulence. Objectives: To determine the prevalence of enterococci species causing healthcare-associated infections (HAIs) in different intensive care units (ICUs), as well as intestinal carriage and environmental contamination; to study the antimicrobial susceptibility pattern of the isolates with special reference to vancomycin resistance; and to perform phenotypic and genotypic detection of gelatinase, cytolysin and biofilm formation among the isolates. Patients and Methods: This study was carried out in the infection control laboratory at Assiut University Hospitals over a period of one year. Clinical samples were collected from 285 patients with various HAIs acquired after admission to different ICUs. Rectal swabs were taken from 14 cases for detection of enterococci carriage. In addition, 1377 environmental samples were collected from the surroundings of the patients. Identification was done by conventional bacteriological methods and confirmed by the analytical profile index (API). Antimicrobial sensitivity testing was performed by the Kirby-Bauer disc diffusion method, and detection of vancomycin resistance was done by the agar screen method. For the isolates, cytolysin and gelatinase production were detected phenotypically, and biofilm formation was detected by the tube method, the Congo red method and the microtiter plate method. We performed polymerase chain reaction (PCR) for detection of some virulence genes (gelE, cylA, vanA, vanB and esp). Results: Enterococci caused 10.5% of the HAIs. Respiratory tract infection was the predominant type (86.7%). The commonest species were E. gallinarum (36.7%), E. casseliflavus (30%), E. faecalis (30%), and E. durans (3.4%). Vancomycin resistance was detected in a total of 40% (12/30) of those isolates. The risk factors associated with acquiring vancomycin-resistant enterococci (VRE) were immune suppression (P=0.031) and artificial feeding (P=0.008). For the rectal swabs, enterococci species were detected in 71.4% of samples, with the predominance of E. casseliflavus (50%). Most of these isolates were vancomycin resistant (70%). Out of a total of 1377 environmental samples, 577 (42%) were contaminated with different microorganisms. Enterococci were detected in 1.7% (10/577) of the contaminated samples, 50% of which were vancomycin resistant. All isolates were resistant to penicillin, ampicillin, oxacillin, ciprofloxacin, amikacin, erythromycin, clindamycin and trimethoprim-sulfamethoxazole. For the remaining antibiotics, variable percentages of resistance were reported. Cytolysin and gelatinase were detected phenotypically in 16% and 48% of the isolates, respectively. The microtiter plate method showed the highest percentage of biofilm detection among all isolated species (100%). The studied virulence genes gelE, esp, vanA and vanB were detected in 62%, 12%, 2% and 12% of isolates, respectively, while the cylA gene was not detected in any isolate. Conclusions: A significant percentage of enterococci was isolated from patients and environments in the ICUs. Many virulence factors were detected phenotypically and genotypically among the isolates. The high percentage of resistance, coupled with the risk of cross-transmission to other patients, makes enterococci infections a significant infection control issue in hospitals.

Keywords: antimicrobial resistance, enterococci, ICUs, virulence factors

Procedia PDF Downloads 283
53 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components

Authors: Francesca Gullo, Paola Palmero, Massimo Messori

Abstract:

Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood, two approaches can be used: i) bottom-up and ii) top-down. In the second approach, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolic, and carbonyl groups. These chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chemicals containing chlorine (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductive agent (hydrosulfite) or oxidative agent (hydrogen peroxide) is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls and by monitoring both the chemical treatments and the process conditions, for instance, the treatment time, the concentration of chemical solutions, the pH value, and the temperature. The elimination of light scattering in the wood is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method. In the polymer impregnation method, the wood scaffold is treated under vacuum with polymers having a matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to obtain a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of biobased polymers, while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy, and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the field of construction for modern energy-efficient buildings.

Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites

Procedia PDF Downloads 52
52 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at the station level and to examine the association between subway ridership and bus demand, incorporating commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may have an impact on subway ridership. However, not only the endogenous relationship between bus and subway demand but also the characteristics of commercial business within a subway station's sphere of influence and the integrated transit network topology should be considered. Therefore, a statistical approach to estimating subway ridership at the station level should account for the endogeneity and heteroscedasticity issues that might exist in the subway ridership prediction model. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and on developing a more precise subway ridership estimation accounting for this statistical bias. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours with one-hour interval time panels. The subway and bus ridership data were collected from Seoul Smart Card data for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. The independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. As a result, it was found that bus ridership and subway ridership were endogenous to each other, and they had significantly positive coefficients, which means one transit mode could increase the other transportation mode's ridership. In other words, the two transit modes of subway and bus have a mutual relationship rather than a competitive one. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper comprise six types: medical, educational, recreational, financial, food service, and shopping. From the model results, a higher rate of medical, financial, shopping, and food service facilities leads to an increase in subway ridership at a station, while recreational and educational facilities show lower subway ridership. Complex network theory was applied to estimate integrated network topology measures that cover the entire Seoul transit network system and to provide a framework for assessing their impact on subway ridership. The centrality measures were found to be significant and showed a positive sign, indicating that higher centrality led to more subway ridership at the station level. Out-of-sample model accuracy tests showed that the 3SLS model has a lower mean square error than OLS and that the methodological approach of the 3SLS model is plausible for estimating subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2017R1C1B2010175).
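
To make the endogeneity treatment concrete, the sketch below shows the instrumental-variable logic behind the first two stages for one equation: bus ridership is first regressed on an instrument and the exogenous regressors, and its fitted values then replace the observed values in the subway equation. Full 3SLS additionally applies GLS across the residuals of both equations. The variable names, the single instrument, and the synthetic data are illustrative, not the authors' specification.

```python
import numpy as np

def ols_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares coefficients for y = X b (X already includes an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def two_stage_subway_model(bus, subway, exog, instrument):
    """Stage 1: predict endogenous bus ridership from instrument + exogenous regressors.
    Stage 2: regress subway ridership on predicted bus ridership + exogenous regressors."""
    n = len(subway)
    ones = np.ones((n, 1))
    Z = np.hstack([ones, instrument.reshape(-1, 1), exog])   # first-stage design
    bus_hat = Z @ ols_fit(Z, bus)
    X2 = np.hstack([ones, bus_hat.reshape(-1, 1), exog])     # second-stage design
    return ols_fit(X2, subway)                                # [const, bus effect, exog effects]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 1000
    commercial = rng.random((n, 2))            # e.g. medical / shopping facility rates
    instrument = rng.random(n)                 # something shifting bus demand only
    u = rng.normal(size=n)                     # shared shock creating endogeneity
    bus = 2.0 * instrument + commercial @ [1.0, 0.5] + u + rng.normal(size=n)
    subway = 1.5 * bus + commercial @ [2.0, 1.0] + u + rng.normal(size=n)
    print(np.round(two_stage_subway_model(bus, subway, commercial, instrument), 2))
```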

Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology

Procedia PDF Downloads 144
51 Spatial Variation in Urbanization and Slum Development in India: Issues and Challenges in Urban Planning

Authors: Mala Mukherjee

Abstract:

Background: India is urbanizing very fast, and urbanisation in India is treated as one of the most crucial components of economic growth. Though the pace of urbanisation (31.6 per cent in 2011) is slower and lower than the average for Asia, the absolute number of people residing in cities and towns has increased substantially. Rapid urbanization leads to urban poverty, and it is well represented in slums. Currently India has four metropolises and 53 million-plus cities. All of them have significant slum populations, but the standard of living and the success of slum development programmes vary across regions. Objectives: The objectives of the paper are to show how urbanisation and slum development vary across space; to show the spatial variation in the standard of living in Indian slums; and to analyse how the implementation of slum development policies like JNNURM and Rajiv Awas Yojana varies across cities and brings different results in different regions, and what factors are responsible for such variation. Data Sources and Methodology: Census 2011 data on urban population and slum households and amenities have been used for analysing the regional variation of urbanisation in the 53 million-plus cities of India. Special focus has been put on the Kolkata Metropolitan Area. Statistical techniques like z-scores and PCA have been employed to work out a Standard of Living Deprivation score for all the slums of the 53 metropolises. ArcGIS software is used for making maps. Standard of living has been measured in terms of access to basic amenities, infrastructure and assets like drinking water, sanitation, housing condition, bank account, and so on. Findings: 1. The first finding reveals that migration and urbanization are very high in Greater Mumbai, Delhi, Bangaluru, Chennai, Hyderabad and Kolkata, but the slum population is high in Greater Mumbai (50% of the population live in slums), Meerut, Faridabad, Ludhiana, Nagpur, Kolkata etc. Though the rate of urbanization is high in southern and western states, the percentage of slum population is high in northern states (except Greater Mumbai). 2. Standard of living also varies widely. Slums of Greater Mumbai and north Indian cities score fairly high in the index, indicating that the standard of living is high in those slums compared to the slums in eastern India (Dhanbad, Jamshedpur, Kolkata). Therefore, though Kolkata has a relatively smaller percentage of slum population compared to north and south Indian cities, the standard of living in Kolkata's slums is deplorable. 3. It is interesting to note that even within the Kolkata Metropolitan Area, slums located in the southern and eastern municipal towns like Rajpur-Sonarpur, Pujali, Diamond Harbour, Baduria and Dankuni have a lower standard of living compared to the slums located in the Hooghly industrial belt like Titagarh, Rishrah, Srerampore etc. Slums of the Hooghly industrial belt are older than the slums located in the eastern and southern parts of the urban agglomeration. 4. Therefore, urban development and the emergence of slums should not be the only issues of urban governance; the standard of living should be the main focus. Slums located in the main cities like Delhi, Mumbai and Kolkata get more attention from urban planners, and similarly, older slums in a city receive greater political attention compared to the slums of smaller cities and newly emerged slums of the peripheral parts.
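
A minimal sketch of the z-score/PCA approach mentioned in the methodology: amenity indicators are standardized to z-scores and the first principal component is used as a composite Standard of Living Deprivation score. The indicator names, the sign convention, and the synthetic data are assumptions; the abstract does not list the exact variables used.

```python
import numpy as np
from sklearn.decomposition import PCA

def deprivation_score(indicators: np.ndarray) -> np.ndarray:
    """First-principal-component composite of standardized deprivation indicators.
    Rows are slums/cities; columns are indicators where larger = more deprived."""
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    pca = PCA(n_components=1)
    score = pca.fit_transform(z).ravel()
    # Orient the score so that higher always means more deprived
    if np.corrcoef(score, z.mean(axis=1))[0, 1] < 0:
        score = -score
    return score

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Assumed columns: % without piped water, % without latrine,
    # % non-durable housing, % without bank account
    data = rng.random((10, 4)) * 100
    print(np.round(deprivation_score(data), 2))
```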

Keywords: urbanisation, slum, spatial variation, India

Procedia PDF Downloads 359
50 The Effects of Circadian Rhythms Change in High Latitudes

Authors: Ekaterina Zvorykina

Abstract:

Nowadays, the Arctic and Antarctic regions are recognized as among the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is the health adaptation of the people who migrate to the Arctic region for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the time of the year. It means that they are deprived of natural sunlight in the winter season and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, the controlled artificial light in winter is also an issue. The results of recent studies on night-shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression, and Alzheimer's disease. Moreover, people exposed to frequent time zone changes are also subject to higher risks of heart attack and cancer. Thus, our main goals are to understand how high-latitude work and living conditions can affect human health and how this can be prevented. In our study, we analyze molecular and cellular factors which play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The main well-studied index of circadian timing is melatonin or its metabolite 6-sulfatoxymelatonin. At low light intensity, melatonin synthesis is disturbed, and as a result the human organism requires more time for sleep, which is still disregarded when it comes to working time organization. Lack of melatonin also causes a shortage in serotonin production, which leads to a higher depression risk. Melatonin is also known to inhibit oncogenes and increase the apoptosis level in cells, the main factors for tumor growth, as well as circadian clock genes (for example Per2). Thus, people who work at high latitudes can be distinguished as a risk group for cancer and demand more attention. Clock genes, known to be among the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to the accumulation of superoxide dismutase and oxidative stress, which is one of the main factors for early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with accommodation therapy. Screening is focused on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can lead to an increase in migration to high latitudes and, as a result, the prosperity of the Arctic region.

Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors

Procedia PDF Downloads 155
49 A Rapid and Greener Analysis Approach Based on Carbon Fiber Column System and MS Detection for Urine Metabolomic Study After Oral Administration of Food Supplements

Authors: Zakia Fatima, Liu Lu, Donghao Li

Abstract:

The analysis of biological fluid metabolites holds significant importance in various areas, such as medical research, food science, and public health. Investigating the levels and distribution of nutrients and their metabolites in biological samples allows researchers and healthcare professionals to determine nutritional status, find hypovitaminosis or hypervitaminosis, and monitor the effectiveness of interventions such as dietary supplementation. Moreover, analysis of nutrient metabolites provides insight into their metabolism, bioavailability, and physiological processes, aiding in the clarification of their health roles. Hence, the exploration of a distinct, efficient, eco-friendly, and simpler methodology is of great importance to evaluate the metabolic content of complex biological samples. In this work, a green and rapid analytical method based on an automated online two-dimensional microscale carbon fiber/activated carbon fiber fractionation system and time-of-flight mass spectrometry (2DμCFs-TOF-MS) was used to evaluate metabolites of urine samples after oral administration of food supplements. The automated 2DμCFs instrument consisted of a microcolumn system with bare carbon fibers and modified carbon fiber coatings. Carbon fibers and modified carbon fibers exhibit different surface characteristics and retain different compounds accordingly. Three kinds of mobile-phase solvents were used to elute the compounds of varied chemical heterogeneities. The 2DμCFs separation system has the ability to effectively separate different compounds based on their polarity and solubility characteristics. No complicated sample preparation method was used prior to analysis, which makes the strategy more eco-friendly, practical, and faster than traditional analysis methods. For optimum analysis results, mobile phase composition, flow rate, and sample diluent were optimized. Water-soluble vitamins, fat-soluble vitamins, and amino acids, as well as 22 vitamin metabolites and 11 vitamin metabolic pathway-related metabolites, were found in urine samples. All water-soluble vitamins except vitamin B12 and vitamin B9 were detected in urine samples. However, no fat-soluble vitamin was detected, and only one metabolite of Vitamin A was found. The comparison with a blank urine sample showed a considerable difference in metabolite content. For example, vitamin metabolites and three related metabolites were not detected in blank urine. The complete single-run screening was carried out in 5.5 minutes with the minimum consumption of toxic organic solvent (0.5 ml). The analytical method was evaluated in terms of greenness, with an analytical greenness (AGREE) score of 0.72. The method’s practicality has been investigated using the Blue Applicability Grade Index (BAGI) tool, obtaining a score of 77. The findings in this work illustrated that the 2DµCFs-TOF-MS approach could emerge as a fast, sustainable, practical, high-throughput, and promising analytical tool for screening and accurate detection of various metabolites, pharmaceuticals, and ingredients in dietary supplements as well as biological fluids.

Keywords: metabolite analysis, sustainability, carbon fibers, urine

Procedia PDF Downloads 24
48 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance

Authors: Omer Leshem, Michael F. Schober

Abstract:

This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members' recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members’ cognitive perspective taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members' affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances, and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing details of the method or hypotheses, to perform a full-length solo improvised concert that would include an ‘unusual’ piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School’s Glass Box Theater, the home of leading NYC jazz venue ‘The Stone.’ Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: ‘Perform a 3-5-minute improvised piece with the intention of conveying sadness.’ (Sadness was chosen based on previous music cognition lab studies, where solo listeners were less likely to select sadness as the musically-expressed emotion accurately from a list of basic emotions, and more likely to misinterpret sadness as tenderness). Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seat. Participants used their own words to describe the emotion the performer had intended to express, and then to select the intended emotion from a list. They also reported the emotions they had felt while listening using Izard’s differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis’ interpersonal reactivity index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected ‘sadness’ reported feeling marginally sadder than people who did not select sadness. Hypotheses 2 was not supported; audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer’s actual feelings. Results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.

Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion

Procedia PDF Downloads 131
47 Influence of the Local External Pressure on Measured Parameters of Cutaneous Microcirculation

Authors: Irina Mizeva, Elena Potapova, Viktor Dremin, Mikhail Mezentsev, Valeri Shupletsov

Abstract:

Local tissue perfusion is regulated by the microvascular tone, which is under the control of a number of physiological mechanisms. Laser Doppler flowmetry (LDF) together with wavelet analysis is the most commonly used technique to study the regulatory mechanisms of cutaneous microcirculation. External factors such as temperature, local pressure of the probe on the skin, etc., influence the blood flow characteristics and are used as physiological tests to evaluate microvascular regulatory mechanisms. Local probe pressure influences the microcirculation parameters measured by optical methods: diffuse reflectance spectroscopy, fluorescence spectroscopy, and LDF. Therefore, further study of probe pressure effects can be useful to improve the reliability of optical measurement. During pressure tests, the variation of the mean perfusion measured by means of LDF is usually estimated. Additional information concerning the physiological mechanisms of the vascular tone regulation system in response to local pressure can be obtained using spectral analysis of LDF samples. The aim of the present work was to develop a protocol and data processing algorithm appropriate for studying the physiological response to the local pressure test. Involving 6 subjects (20±2 years) and providing 5 measurements for every subject, we estimated the intersubject and intergroup variability of the response of both the averaged and oscillating parts of the LDF sample to external surface pressure. The final purpose of the work was to find special features which can further be used in wider clinical studies. The cutaneous perfusion measurements were carried out by LAKK-02 (SPE LAZMA Ltd., Russia); the skin loading was provided by an originally designed device which allows one to distribute the pressure around the LDF probe. The probe was installed on the dorsal part of the distal phalanx of the index finger. We collected measurements continuously for one hour and varied the loading from 0 to 180 mmHg stepwise with a step duration of 10 minutes. Further, we post-processed the samples using the wavelet transform and traced the energy of oscillations in five frequency bands over time. Weak loading leads to pressure-induced vasodilation (PIV), so one should take into account that the perfusion measured under pressure conditions will be overestimated. On the other hand, we revealed a decrease in endothelial-associated fluctuations. Further loading (88 mmHg) induces amplification of pulsations in all frequency bands. We assume that such loading leads to a higher number of closed capillaries, a higher input of arterioles into the LDF signal and, as a consequence, more vivid oscillations which are mainly formed in arterioles. External pressure higher than 144 mmHg leads to a decrease of the oscillating components; after removing the loading, a very rapid restoration of tissue perfusion takes place. In this work, we have demonstrated that local skin loading influences the microcirculation parameters measured by optical techniques; this should be taken into account while developing portable electronic devices. The proposed protocol of local loading allows one to evaluate PIV as well as to trace the dynamics of blood flow oscillations. This study was supported by the Russian Science Foundation under project N 18-15-00201.
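As a rough illustration of the post-processing step described above, the following Python sketch computes time-averaged wavelet energies of an LDF record in five frequency bands; the band limits, sampling rate, file name, and Morlet parameters are assumptions drawn from common LDF practice, not values taken from the study itself.

```python
# Sketch: wavelet band energies of an LDF record (assumed bands and parameters).
import numpy as np
import pywt

fs = 20.0                                   # assumed LDF sampling rate, Hz
dt = 1.0 / fs
signal = np.loadtxt("ldf_record.txt")       # hypothetical perfusion time series

# Frequency bands commonly attributed to endothelial, neurogenic, myogenic,
# respiratory and cardiac activity (assumed limits, Hz).
bands = {"endothelial": (0.0095, 0.021),
         "neurogenic":  (0.021, 0.052),
         "myogenic":    (0.052, 0.145),
         "respiratory": (0.145, 0.6),
         "cardiac":     (0.6, 2.0)}

wavelet = "cmor1.5-1.0"                     # complex Morlet wavelet
freqs = np.geomspace(0.005, 3.0, 200)       # analysis frequencies, Hz
scales = pywt.central_frequency(wavelet) / (freqs * dt)

coeffs, coef_freqs = pywt.cwt(signal, scales, wavelet, sampling_period=dt)
power = np.abs(coeffs) ** 2                 # time-frequency energy density

for name, (lo, hi) in bands.items():
    mask = (coef_freqs >= lo) & (coef_freqs < hi)
    print(f"{name:12s}: {power[mask].mean():.3e}")   # time- and band-averaged energy
```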

Keywords: blood microcirculation, laser Doppler flowmetry, pressure-induced vasodilation, wavelet analysis

Procedia PDF Downloads 150
46 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated from the Normalized Burn Ratio (NBR) index. This is usually performed manually by comparing two images obtained before and after the fire: the bitemporal difference of the preprocessed satellite images, the dNBR, is calculated, and the area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt area severity mapping with a medium spatial resolution sensor (15 m). The tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed; in WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which includes a Graphical User Interface (GUI) to make it user-friendly. The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest, affected by the Australian fire season between 2019 and 2020, is used to describe the workflow of the WWSAT. At this site, more than 7,809 km2 of burnt area was detected using Sentinel-2 data, giving an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests (USA), affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through the visual inspection made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area since satellite images are free and the cost of field surveys is avoided.
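A minimal sketch of the core dNBR computation described above, using the GEE Python API: the collection ID, date windows, geometry, and cloud filter are placeholders, and the severity thresholds follow commonly cited USGS-style values rather than the tool's exact configuration.

```python
# Sketch: bitemporal dNBR from Sentinel-2 in Google Earth Engine (Python API).
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([150.0, -34.0, 150.6, -33.4])   # placeholder extent

def nbr_composite(start, end):
    """Median cloud-filtered composite and its NBR = (B8 - B12)/(B8 + B12)."""
    col = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
           .filterBounds(aoi)
           .filterDate(start, end)
           .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))
    return col.median().normalizedDifference(["B8", "B12"]).rename("NBR")

pre_nbr = nbr_composite("2019-10-01", "2019-10-31")    # pre-fire window (placeholder)
post_nbr = nbr_composite("2020-02-01", "2020-02-29")   # post-fire window (placeholder)
dnbr = pre_nbr.subtract(post_nbr).rename("dNBR")

# Assumed USGS-style severity classes (dNBR thresholds).
severity = (dnbr.where(dnbr.lt(0.1), 0)                        # unburnt
                .where(dnbr.gte(0.1).And(dnbr.lt(0.27)), 1)    # low severity
                .where(dnbr.gte(0.27).And(dnbr.lt(0.44)), 2)   # moderate-low
                .where(dnbr.gte(0.44).And(dnbr.lt(0.66)), 3)   # moderate-high
                .where(dnbr.gte(0.66), 4))                     # high severity

print(severity.clip(aoi).getInfo()["bands"])   # quick sanity check of the result image
```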

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 233
45 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Nowadays, wetlands are the subject of contradictory debates involving scientific, political, and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures, and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves, and depending on their age, composition, and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies, and management experiments. The geographical and physical characteristics of the wetlands of the Centre-Val de Loire region conceal a large number of natural habitats that harbour great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even though it is a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing the characteristic wetland types for each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (around 1 hectare), because these wetlands generally represent spatially complex features. To address this limitation, the use of very high spatial resolution images (<3 m) is necessary to map both small and large areas. Moreover, the recent evolution of artificial intelligence (AI) and deep learning methods for satellite image processing has shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges, and brackish marshes), with a kappa index higher than 85%.
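For illustration only, an object-oriented classification of the kind described above can be prototyped with scikit-learn, comparing k-NN and SVM and reporting overall accuracy and the kappa index; the feature files, labels, and hyperparameters below are hypothetical, not those of the study.

```python
# Sketch: comparing k-NN and SVM classifiers on per-object spectral/textural features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, accuracy_score

# Hypothetical table: one row per image object (segment), columns are mean band
# reflectances and texture measures; y holds the wetland-type labels.
X = np.load("object_features.npy")     # placeholder file names
y = np.load("object_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM":  make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale")),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: overall accuracy = {accuracy_score(y_test, pred):.3f}, "
          f"kappa = {cohen_kappa_score(y_test, pred):.3f}")
```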

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 69
44 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell

Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses

Abstract:

Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily and are starting to affect the ecosystem directly. Conventional waste-water treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional waste-water systems with non-conventional processes. In the specific case of an ozonation process, its efficiency highly depends on a perfect dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of Diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps: first, an experimental phase, in which a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of Diclofenac aqueous solutions; performance is evaluated through an index of utilized ozone, which relates the amount of ozone supplied to the system per milligram of degraded pollutant. Next, a theoretical phase, in which the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that can mathematically represent these reaction mechanisms with adjustable parameters that can be fitted to the experimental results, giving the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved Diclofenac mineralization achieved in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals, and products of the reaction; optimal values of the reaction rate constants, with simulated Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal percentage of 69.9%, requiring only an optimal FeO catalyst loading of 0.3 g/L. These results showed that a flotation cell could be used as a reactor in ozonation, catalytic ozonation, and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (Ha < 3) by producing microbubbles and maintaining a good catalyst distribution.
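As a toy illustration of the fitting step in the third phase, one can fit an apparent rate constant to degradation data and compute a specific ozone-consumption index; the data, the pseudo-first-order form, and the index definition below are assumptions for demonstration, not the study's actual model.

```python
# Sketch: pseudo-first-order fit of pollutant decay and a specific ozone-use index.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical experimental data: time (min) and diclofenac concentration (mg/L).
t = np.array([0, 5, 10, 15, 20, 30, 45, 60], dtype=float)
c = np.array([30.0, 22.1, 16.4, 12.3, 9.0, 4.9, 2.0, 0.8])

def first_order(t, k, c0):
    """Assumed pseudo-first-order decay: c(t) = c0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

(k, c0), _ = curve_fit(first_order, t, c, p0=[0.05, c[0]])
print(f"apparent rate constant k = {k:.3f} 1/min")

# Illustrative ozone-utilization index: mg O3 supplied per mg pollutant degraded.
ozone_supplied_mg = 250.0                 # placeholder cumulative O3 dose
degraded_mg_per_L = c[0] - c[-1]
print(f"ozone use index = {ozone_supplied_mg / degraded_mg_per_L:.1f} mg O3 per mg DCF")
```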

Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTS intensification

Procedia PDF Downloads 111
43 Role of Toll Like Receptor-2 in Female Genital Tuberculosis Disease Infection and Its Severity

Authors: Swati Gautam, Salman Akhtar, S. P. Jaiswar, Amita Jain

Abstract:

Background: Female genital tuberculosis (FGTB) is now a major global health problem, mostly in developing countries including India. In humans, Mycobacterium tuberculosis (M.tb) is the causative agent of the infection. A high index of suspicion is required for early diagnosis due to the asymptomatic presentation of FGTB. In macrophages, Toll-Like Receptor-2 (TLR-2) is one of the receptors that mediate the host's immune response to M.tb, and its expression on macrophages is important in determining the fate of innate immune responses to M.tb. TLR-2 plays a dual role: on the one hand, its high expression on macrophages worsens the outcome of infection; on the other hand, it maintains M.tb in its dormant stage and avoids activation of M.tb from the latent phase. Single Nucleotide Polymorphisms (SNPs) of the TLR-2 gene play an important role in susceptibility to TB among different populations and, subsequently, in the development of infertility. Methodology: This case-control study was done in the Department of Obstetrics and Gynaecology and the Department of Microbiology at King George's Medical University, U.P., Lucknow, India. A total of 300 subjects (150 cases and 150 controls) were enrolled in the study, only after fulfilling the given inclusion and exclusion criteria. Inclusion criteria: age 20-35 years, menstrual irregularities, positive for Acid-Fast Bacilli (AFB), TB-PCR, or (LJ/MGIT) culture in Endometrial Aspiration (EA). Exclusion criteria: active Koch's disease, on ATT, PCOS, endometriosis or fibroid, positive for gonococcal or chlamydial infection. Blood samples were collected in EDTA tubes from cases and healthy control women (HCW), and genomic DNA extraction was carried out by the salting-out method. Genotyping of the TLR-2 genetic variants (Arg753Gln and Arg677Trp) was performed using the amplification refractory mutation system (ARMS) PCR technique. PCR products were analyzed by electrophoresis on 1.2% agarose gel and visualized by gel-doc. Statistical analysis of the data was performed using SPSS 16.3 software, computing odds ratios (OR) with 95% CI. Linkage Disequilibrium (LD) analysis was done with the SNPStats online software. Results: For the TLR-2 (Arg753Gln) polymorphism, a significant risk of FGTB was observed with the GG homozygous mutant genotype (OR=13, CI=0.71-237.7, p=0.05) and the AG heterozygous mutant genotype (OR=13.7, CI=0.76-248.06, p=0.03); however, the G allele (OR=1.09, CI=0.78-1.52, p=0.67) individually was not associated with FGTB. For the TLR-2 (Arg677Trp) polymorphism, a significant risk of FGTB was observed with the TT homozygous mutant genotype (OR=0.020, CI=0.001-0.341, p < 0.001), the CT heterozygous mutant genotype (OR=0.53, CI=0.33-0.86, p=0.014), and the T allele (OR=0.463, CI=0.32-0.66, p < 0.001). The TT mutant genotype was found only in FGTB cases, while the frequency of the CT heterozygote was higher in the control group than in the FGTB group; thus, the CT genotype acted as a protective genotype against FGTB susceptibility. In the haplotype analysis of the TLR-2 genetic variants, four possible combinations (G-T, A-C, G-C, and A-T) were obtained. The frequency of the A-C haplotype was significantly higher in FGTB cases (0.32); it was not observed in the control group and was found only in FGTB cases. Conclusion: In conclusion, the study showed a significant association of both TLR-2 genetic variants with FGTB. Moreover, the presence of specific associated genotypes/alleles suggests the possibility of assessing disease severity; a clinical approach aimed at preventing extensive damage by the disease may also be helpful for its early detection.
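For readers unfamiliar with the statistics reported above, the sketch below shows how an odds ratio and its 95% confidence interval can be computed from a 2x2 genotype-by-group table; the counts are hypothetical and do not come from the study.

```python
# Sketch: odds ratio with 95% CI from a 2x2 case-control table (hypothetical counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls.
    Returns (OR, lower, upper) using the Woolf (log) method; adds 0.5 to empty cells."""
    if 0 in (a, b, c, d):                       # Haldane-Anscombe correction
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # standard error of ln(OR)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: mutant-genotype carriers vs. non-carriers in cases and controls.
print("OR = %.2f (95%% CI %.2f-%.2f)" % odds_ratio_ci(a=40, b=110, c=15, d=135))
```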

Keywords: ARMS, EDTA, FGTB, TLR

Procedia PDF Downloads 303
42 A Review on Cyberchondria Based on Bibliometric Analysis

Authors: Xiaoqing Peng, Aijing Luo, Yang Chen

Abstract:

Background: Cyberchondria, an "emerging risk" of the information era, is a new abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots, and trends of Cyberchondria. Methods: Based on a total of 77 articles regarding "Cyberchondria" extracted from the Web of Science from the beginning of the database until October 2019, the literature trends, countries, institutions, and hotspots were analyzed by bibliometric analysis; the concept definition of Cyberchondria, instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since "Cyberchondria" was put forward for the first time in 2001, the last two decades have witnessed a noticeable increase in the amount of literature; in particular, output during 2014-2019 quadrupled to 62 articles, compared with only 15 before 2014, which shows that Cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The concept definition of Cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state and has been invoked to refer to the anxiety-amplifying effects of online health-related searches. The first and most frequently cited scale for measuring the severity of Cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized Cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, the last of which was later shown not to be necessary for the construct. Brazilian, German, Turkish, Polish, and Chinese versions were subsequently developed, improved, and culturally adjusted, and the CSS was shortened to a simplified version (CSS-12) in 2019, all of which warrant further verification. The hotspots of Cyberchondria research mainly focus on relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley index, and problematic internet use, trying to clarify the roles played by "associated factors" and "anxiety-amplifying factors" in the development of Cyberchondria, in order to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety, and online health-related searches. Although the treatment and intervention of Cyberchondria are still at an initial, exploratory stage, there have been meaningful attempts to seek effective strategies from different angles, such as online psychological treatment, network technology management, health information literacy improvement, and public health services. Conclusion: Research on Cyberchondria is in its infancy but deserves more attention. A conceptual consensus on Cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments would be the main research directions in the near future.

Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches

Procedia PDF Downloads 122
41 Implementation of Cord Blood-Derived Stem Cells in the Regeneration of Two Experimental Models: Carbon Tetrachloride and S. Mansoni Induced Liver Fibrosis

Authors: Manal M. Kame, Zeinab A. Demerdash, Hanan G. El-Baz, Salwa M. Hassan, Faten M. Salah, Wafaa Mansour, Olfat Hammam

Abstract:

Cord blood (CB) derived Unrestricted Somatic Stem Cells (USSCs), with their multipotentiality, hold great promise in liver regeneration. This work aims to evaluate the therapeutic potential of USSCs in two experimental models of chronic liver injury, induced either by S. mansoni infection in Balb/c mice or by CCl4 injection in hamsters. Isolation, propagation, and characterization of USSCs from CB samples were performed. USSCs were induced to differentiate into osteoblasts, adipocytes, and hepatocyte-like cells. Cells of the third passage were transplanted in two models of liver fibrosis. (1) CCl4 hamster model: twenty hamsters were induced to liver fibrosis by repeated i.p. injection of 100 μl CCl4/hamster for 8 weeks. This model was designed as follows: 10 hamsters with liver fibrosis treated with i.h. injection of 3×10⁶ USSCs (USSCs transplanted group), 10 hamsters with liver fibrosis (pathological control group), and 10 hamsters with healthy livers (normal control group). (2) Murine chronic S. mansoni model: twenty mice were induced to liver fibrosis with S. mansoni cercariae (60 cercariae/mouse) using the tail immersion method and left for 12 weeks. In this model, 10 mice with liver fibrosis were transplanted with an i.v. injection of 1×10⁶ USSCs (USSCs transplanted group); the other two groups were designed as in the hamster model. Animals were sacrificed 12 weeks after USSCs transplantation, and their liver sections were examined for the detection of human hepatocyte-like cells by immunohistochemistry staining. Moreover, liver sections were examined for the level of fibrosis, and fibrotic indices were calculated. Sera of sacrificed animals were tested for liver functions. CB USSCs, with fibroblast-like morphology, expressed high levels of CD44, CD90, CD73, and CD105 and were negative for CD34, CD45, and HLA-DR. USSCs showed high expression of transcripts for Oct4 and Sox2 and were differentiated in vitro into osteoblasts and adipocytes. In both animal models, in vitro induced hepatocyte-like cells were confirmed by cytoplasmic expression of glycogen, alpha-fetoprotein, and cytokeratin 18. Livers of the USSCs transplanted groups showed engraftment with human hepatocyte-like cells, as proved by cytoplasmic expression of human alpha-fetoprotein, cytokeratin 18, and OV6. In addition, livers of these groups showed less fibrosis than those of the pathological control groups: liver functions in the form of serum AST and ALT levels and serum total bilirubin levels were significantly lower in the USSCs transplanted groups than in the pathological control groups (p < 0.001), and the fibrotic index was also significantly lower (p < 0.001). In addition, liver sections of mice injected i.v. with 1×10⁶ USSCs, stained with either H&E or Sirius red, showed diminished granuloma size and a relative decrease in hepatic fibrosis. Our experimental liver fibrosis models transplanted with CB-USSCs thus showed liver engraftment with human hepatocyte-like cells as well as signs of liver regeneration in the form of improvement in liver function assays and fibrosis level. These data provide hope that human CB-derived USSCs can be introduced as multipotent stem cells with great potential in regenerative medicine and strengthen the concept of cellular therapy for the treatment of liver fibrosis.

Keywords: cord blood, liver fibrosis, stem cells, transplantation

Procedia PDF Downloads 308
40 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents

Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady

Abstract:

Background: Lumbar pars pathology is a common cause of pain in the growing spine. It can be seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health due to its resistance to traditional management. There is a current lack of consensus of classification and treatment for pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that considers findings on MRI, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury using patient age, sport participation and pain characteristics. MRI of the at-risk cohort enables augmentation of existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the ‘early defect’ CT classification. The group previously referred to as a 'progressive stage' defect on CT can be split into 2A and 2B categories. 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal stage defects on CT, characterised by pseudarthrosis. MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (1 and 2a) can heal and are treated in a custom-made hard brace for 12 weeks. It is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping. Exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until becoming symptom-free. At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation is focused on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, CT scan is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (2b and 3) cannot heal and are not braced, and rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with potential for healing, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies.

Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol

Procedia PDF Downloads 152
39 A Bibliometric Analysis of Ukrainian Research Articles on SARS-COV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems

Authors: Sabina Auhunas

Abstract:

These days in Ukraine, Open Science is developing dramatically for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists as well as to deliver their own scientific data to national and international journals. However, when it comes to generalizing data on the scientific activities of Ukrainian scientists, these data are often integrated into E-systems that operate on inconsistent and barely related information sources. In order to resolve these issues, developed countries productively use E-systems designed to store and manage research data, such as Current Research Information Systems, which enable combining uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, and the bibliographic relationship of journals) were taken from the Scopus and Web of Science databases. The study also considered information from COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, taken directly from documents published by authors with a territorial affiliation to Ukraine. These databases provide the information necessary for bibliometric analysis, together with details such as copyright, which may not be available in other databases (e.g., ScienceDirect). Search criteria and results for each online database were defined according to the WHO classification of the virus and the disease caused by this virus and are presented in Table 1. First, we identified 89 research papers, which provided the final data set after consolidation and removal of duplicates; however, only 56 papers were used for the analysis. The total number of documents retrieved from the WoS database was 21,641 (48 of them affiliated to Ukraine), and from the Scopus database 32,478 (41 affiliated to Ukraine). According to the publication activity of Ukrainian scientists, the following areas prevailed: Education, Educational Research (9 documents, 20.58%); Social Sciences, Interdisciplinary (6 documents, 11.76%); and Economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of the published papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were basically funded by 5 entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationship of journals, and reference linking. Further research on the development of the national scientific E-database continues using brand new analytical methods.

Keywords: content analysis, COVID-19, scientometrics, text mining

Procedia PDF Downloads 112
38 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch’s defensible space maps was combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: Fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction and it starts with high-quality data and education.
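Although the FireWatch pipeline itself is proprietary, the basic idea of a spectral vegetation index map from multispectral imagery can be illustrated in a few lines of Python; the band file names and the 0.4 threshold below are purely hypothetical.

```python
# Sketch: a normalized difference vegetation index (NDVI) map from two spectral bands.
import numpy as np

# Hypothetical co-registered reflectance rasters from the multispectral camera.
red = np.load("red_band.npy").astype(float)
nir = np.load("nir_band.npy").astype(float)

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)   # avoid division by zero

# Simple illustrative screening: flag densely vegetated pixels (potential fuel).
fuel_mask = ndvi > 0.4          # threshold is an assumption, tuned per survey
print("vegetated fraction: %.1f%%" % (100 * fuel_mask.mean()))
```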

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 106
37 A Rare Case of Dissection of Cervical Portion of Internal Carotid Artery, Diagnosed Postpartum

Authors: Bidisha Chatterjee, Sonal Grover, Rekha Gurung

Abstract:

Postpartum dissection of the internal carotid artery is a relatively rare condition and is considered an underlying aetiology in 5% to 25% of strokes under the age of 30 to 45 years. However, 86% of these cases recover completely and 14% have mild focal neurological symptoms. Prognosis is generally good with early intervention. The quoted risk of a repeat carotid artery dissection in subsequent pregnancies is less than 2%. A 36-year-old Caucasian primipara presented on postnatal day one after forceps delivery with tachycardia. In the intrapartum period, she had a history of prolonged rupture of membranes, developed intrapartum sepsis, and was treated with antibiotics. Postpartum ECG showed septal and inferior T wave inversion and a troponin level of 19. An echocardiogram subsequently ruled out postpartum cardiomyopathy. A repeat ECG showed improvement of the previous changes, and in the absence of symptoms no intervention was warranted. On day 4 post-delivery, she developed a droopy right eyelid, pain around the right eye, and itching in the right ear. On examination, she had right-sided ptosis and unequal pupils (right miotic pupil). Cranial nerve examination, reflexes, sensory examination, and muscle power were normal. Apart from migraine, there was no medical or family history of note. In view of the right-sided Horner's syndrome, she had a CT angiogram and subsequently MR/MRA and was diagnosed with dissection of the cervical portion of the right internal carotid artery. She was discharged on a course of Aspirin 75 mg. By the 6-week postnatal follow-up, the patient had recovered significantly, with occasional episodes of unequal pupils and tingling of the right toes that resolved spontaneously. Cervical artery dissection, including vertebral and carotid artery dissection, is a rare complication of pregnancy with an estimated annual incidence of 2.6-3 per 100,000 pregnancy hospitalizations. The aetiology remains unclear, though trauma from straining during labour, underlying arterial disease, and preeclampsia have been implicated. The hypercoagulable state of pregnancy and the puerperium could also be an important factor. 60-90% of cases present with severe headache and neck pain, which generally precede neurological symptoms such as ipsilateral Horner's syndrome, retroorbital pain, tinnitus, and cranial nerve palsy. Although rare, the consequences of delayed diagnosis and management can include severe and permanent neurological deficits. Where there is a strong index of suspicion, patients should undergo MRI or MRA of the head and neck. Antithrombotic and antiplatelet therapy forms the mainstay of treatment, with selected cases needing endovascular stenting. The long-term prognosis is favourable, with either complete resolution or minimal deficit if treatment is prompt. Patients should be counselled about the recurrence risk and the possibility of stroke in a future pregnancy. Carotid artery dissection is rare and treatable but needs early diagnosis and treatment. Postpartum headache and neck pain with neurological symptoms should prompt urgent imaging followed by antithrombotic and/or antiplatelet therapy. Most cases resolve completely or with minimal sequelae.

Keywords: postpartum, dissection of internal carotid artery, magnetic resonance angiogram, magnetic resonance imaging, antiplatelet, antithrombotic

Procedia PDF Downloads 95
36 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control

Authors: Marco Frieslaar, Bing Chu, Eric Rogers

Abstract:

Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life, but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies which have restricted access, expensive and specialist equipment and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke, rehabilitation therapy can be applied to upper limb mobility, within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user and react to their level of fatigue and provide tangible physical recovery. It utilizes a SMART Phone and laptop to construct an iterative learning control (ILC) system, that monitors upper arm movement in three dimensions, as a series of exercises are undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimation of predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software, between the shoulder and hand muscles. This is utilized to measure the speed of response of neuron signal transfer along the arm and over time, an online indication of regeneration levels can be obtained. This will prove whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users, via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should create benefits not only for the encouragement of rehabilitation participation, but also an emotional support network potential. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and through these endeavors, enhance the overall recovery success rate.
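The control law at the heart of the scheme is iterative learning control, which updates the assistance signal from one exercise repetition to the next based on the previous repetition's tracking error. The sketch below is a generic, simplified illustration of that idea on a toy first-order plant; it does not reproduce the Riener muscle model or the actual FES controller, and all gains and dynamics are assumptions.

```python
# Sketch: proportional-type iterative learning control, u_{k+1}(t) = u_k(t) + L * e_k(t).
import numpy as np

dt, T = 0.01, 2.0                      # time step and trial length (s), assumed
t = np.arange(0.0, T, dt)
reference = np.sin(np.pi * t / T)      # desired arm trajectory (illustrative)

a, b = 5.0, 5.0                        # toy first-order plant: dx/dt = -a*x + b*u
L = 0.8                                # learning gain (assumed)

def run_trial(u):
    """Simulate one repetition of the exercise for a given input profile u(t)."""
    x = np.zeros_like(t)
    for i in range(1, len(t)):
        x[i] = x[i-1] + dt * (-a * x[i-1] + b * u[i-1])
    return x

u = np.zeros_like(t)                   # start with no assistance
for trial in range(10):                # repeat the exercise; learn between trials
    y = run_trial(u)
    e = reference - y                  # tracking error of this repetition
    u = u + L * e                      # ILC update applied to the next repetition
    print(f"trial {trial + 1}: RMS error = {np.sqrt(np.mean(e**2)):.4f}")
```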

Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation

Procedia PDF Downloads 264
35 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of the mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensorimotor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensorimotor network I; similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
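An analogous pipeline can be sketched in Python with scikit-learn (the study itself used R); the feature matrix, split ratio, and hyperparameters below are illustrative assumptions, and RFE is shown with a linear-kernel SVM because standard RFE needs linear coefficients to rank features.

```python
# Sketch: RF and SVM classification of network mean signals with feature selection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X = np.load("network_signals.npy")   # hypothetical 37 x 15 matrix (subjects x networks)
y = np.load("group_labels.npy")      # 0 = control, 1 = early MS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                           stratify=y, random_state=42)

# Random Forest: Gini-based importances give an intrinsic feature ranking.
rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
top_rf = np.argsort(rf.feature_importances_)[::-1][:1]      # keep the best network

# Recursive feature elimination with a linear-kernel SVM (simplification of rfe).
rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=1).fit(X_tr, y_tr)
top_svm = np.where(rfe.support_)[0]

for name, cols, clf in [("RF",  top_rf,  RandomForestClassifier(n_estimators=500, random_state=42)),
                        ("SVM", top_svm, SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    clf.fit(X_tr[:, cols], y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te[:, cols]))
    print(f"{name} with selected feature(s) {cols}: test accuracy = {acc:.3f}")
```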

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 240