Search results for: reactive strength index
114 Voluntary Disclosure Of Sustainability Information In Malaysian Federal-level Statutory Bodies
Authors: Siti Zabedah Saidin, Aidi Ahmi, Azharudin Ali, Wan Norhayati Wan Ahmad
Abstract:
In today's increasingly complex and interconnected world, the concept of sustainability has transcended mere corporate social responsibility, evolving into a fundamental driver of organizational behaviour and disclosure. This content analysis study delves into the Malaysian federal-level statutory bodies’ annual report for the year 2021, aiming to elucidate the extent of sustainability disclosures within the non-financial sections of these reports. The escalating global emphasis on sustainability has prompted organizations to embrace transparency as a means to demonstrate their commitment to environmental, social, and governance (ESG) considerations. Voluntary sustainability disclosure has emerged as a crucial channel through which organizations communicate their efforts, initiatives, and impacts in these areas, thereby fostering trust and accountability with stakeholders. The study aims to identify and examine the types of sustainability information disclosed voluntarily by the federal-level statutory bodies, concentrating on the non-financial sections of the annual reports. To achieve this, the study adopts a simplified disclosure index, a pragmatic tool that quantifies the extent of sustainability reporting in a standardized manner. Using convenience sampling, the study selects a sample of annual reports from the federal-level statutory bodies in Malaysia, as provided on their respective websites. The content analysis is centred on the non-financial sections of these reports, allowing for an in-depth exploration of sustainability disclosures. The findings of the study present the extent to which Malaysian federal-level statutory bodies embrace sustainability reporting. Through thorough content analysis, the study uncovered diverse dimensions of sustainability information, encompassing environmental impact assessments, social engagement endeavours, and governance frameworks. This reveals a deliberate effort by these bodies to encapsulate their holistic organizational contributions and challenges, transcending traditional financial metrics. This research contributes to the existing literature by providing insights into the evolving landscape of sustainability disclosure practices among Malaysian federal-level statutory bodies. The findings underline the proactive nature of these bodies in voluntarily sharing sustainability-related information, reflecting their recognition of the interconnectedness between organizational success and societal well-being. Furthermore, the study underscores the potential influence of regulatory guidelines and societal expectations in shaping the extent and nature of voluntary sustainability disclosures. Organizations are not merely responding to regulatory mandates but are actively aligning with global sustainability goals and stakeholder expectations. As organizations continue to navigate the intricate web of stakeholder expectations and sustainability imperatives, this study enriches the discourse surrounding transparency and sustainability reporting. The analysis emphasizes the important role of non-financial disclosures in portraying a holistic organizational narrative. 
In an era where stakeholders demand accountability and the interconnectedness of global challenges necessitates collaborative action, the voluntary disclosure of sustainability information stands as a testament to the commitment of Malaysian federal-level statutory bodies to shaping a more sustainable future.
Keywords: voluntary disclosure, sustainability information, annual report, federal-level statutory body
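For illustration, a simplified disclosure index of the kind used in this study can be scored as the share of checklist items a report discloses. The checklist items and binary scoring in the sketch below are assumptions; the abstract does not specify the actual index items or weighting.

```python
# Illustrative scoring of a simplified sustainability disclosure index.
# The checklist items and binary scoring are assumptions for this sketch;
# the study's actual index items are not specified in the abstract.

CHECKLIST = ["environmental_impact", "social_engagement", "governance_framework",
             "energy_use", "community_programs"]

def disclosure_index(report_items: set[str]) -> float:
    """Share of checklist items voluntarily disclosed in a report (0-1)."""
    disclosed = sum(1 for item in CHECKLIST if item in report_items)
    return disclosed / len(CHECKLIST)

# Example: one statutory body's 2021 annual report disclosing three of five items
print(disclosure_index({"environmental_impact", "governance_framework", "community_programs"}))  # 0.6
```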
Procedia PDF Downloads 61
113 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol
Authors: S. B. R. Slagmulder
Abstract:
Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies to address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung and diaphragm protective ventilation. However, their relationship and interpretation in neuro ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values in pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3. What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between both study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the linkage between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. 
Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
Keywords: brain damage, diaphragm dysfunction, occlusion pressure, p0.1, respiratory drive
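A minimal sketch of the planned analysis described in the methodology (unpaired t-test between groups, correlation and simple regression between P0.1 and Pocc), assuming the values have already been extracted per patient from the electronic records; the arrays are placeholders, not study data.

```python
# Sketch of the planned analysis: unpaired t-test between groups and
# correlation/simple regression between P0.1 and Pocc.
# The arrays below are placeholders, not study data.
import numpy as np
from scipy import stats

p01_brain_injured = np.array([1.8, 2.4, 3.1, 2.9, 2.2])     # cmH2O, illustrative
p01_controls      = np.array([1.2, 1.6, 1.9, 1.4, 1.7])
pocc_brain_injured = np.array([-8.5, -11.0, -13.2, -12.1, -9.8])

# Unpaired (independent-samples) t-test comparing P0.1 between the two groups
t_stat, p_value = stats.ttest_ind(p01_brain_injured, p01_controls, equal_var=False)

# Correlation and simple linear regression between P0.1 and Pocc within one group
r, r_p = stats.pearsonr(p01_brain_injured, pocc_brain_injured)
slope, intercept, *_ = stats.linregress(p01_brain_injured, pocc_brain_injured)

print(f"t={t_stat:.2f}, p={p_value:.3f}, r={r:.2f}, slope={slope:.2f}")
```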
Procedia PDF Downloads 67
112 Modeling Thermal Changes of Urban Blocks in Relation to the Landscape Structure and Configuration in Guilan Province
Authors: Roshanak Afrakhteh, Abdolrasoul Salman Mahini, Mahdi Motagh, Hamidreza Kamyab
Abstract:
Urban Heat Islands (UHIs) are distinctive urban areas characterized by densely populated central cores surrounded by less densely populated peripheral lands. These areas experience elevated temperatures, primarily due to impermeable surfaces and specific land use patterns. The consequences of these temperature variations are far-reaching, impacting the environment and society negatively, leading to increased energy consumption, air pollution, and public health concerns. This paper emphasizes the need for simplified approaches to comprehend UHI temperature dynamics and explains how urban development patterns contribute to land surface temperature variation. To illustrate this relationship, the study focuses on the Guilan Plain, utilizing techniques like principal component analysis and generalized additive models. The research centered on mapping land use and land surface temperature in the low-lying area of Guilan province. Satellite data from Landsat sensors for three different time periods (2002, 2012, and 2021) were employed. Using eCognition software, a spatial unit known as a "city block" was utilized through object-based analysis. The study also applied the normalized difference vegetation index (NDVI) method to estimate land surface radiance. Predictive variables for urban land surface temperature within residential city blocks were identified and categorized as intrinsic (related to the block's structure) and neighboring (related to adjacent blocks) variables. Principal Component Analysis (PCA) was used to select significant variables, and a Generalized Additive Model (GAM) approach, implemented using R's mgcv package, modeled the relationship between urban land surface temperature and predictor variables. Notable findings included variations in urban temperature across different years attributed to environmental and climatic factors. Block size, shared boundary, mother polygon area, and perimeter-to-area ratio were identified as the main variables for the generalized additive regression model. This model showed non-linear relationships, with block size, shared boundary, and mother polygon area positively correlated with temperature, while the perimeter-to-area ratio displayed a negative trend. The discussion highlights the challenges of predicting urban surface temperature and the significance of block size in determining urban temperature patterns. It also underscores the importance of spatial configuration and unit structure in shaping urban temperature patterns. In conclusion, this study contributes to the growing body of research on the connection between land use patterns and urban surface temperature. Block size, along with block dispersion and aggregation, emerged as key factors influencing urban surface temperature in residential areas. The proposed methodology enhances our understanding of parameter significance in shaping urban temperature patterns across various regions, particularly in Iran.
Keywords: urban heat island, land surface temperature, LST modeling, GAM, Guilan province
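The GAM itself was fitted with R's mgcv package, as stated above; as a language-neutral illustration of one processing step, the sketch below computes NDVI from red and near-infrared reflectance in Python. Band values are placeholders, not the study's Landsat data.

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel on reflectance arrays.
# The small arrays below stand in for Landsat band rasters.
import numpy as np

red = np.array([[0.12, 0.10], [0.30, 0.25]])   # red-band reflectance (placeholder)
nir = np.array([[0.45, 0.50], [0.32, 0.28]])   # near-infrared reflectance (placeholder)

ndvi = (nir - red) / (nir + red + 1e-9)         # small epsilon avoids division by zero
print(ndvi)          # values near +1 indicate dense vegetation, near 0 bare/built surfaces
print(ndvi.mean())   # a block-level summary like this could feed the GAM as a predictor
```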
Procedia PDF Downloads 73
111 Momentum Profits and Investor Behavior
Authors: Aditya Sharma
Abstract:
Profits earned from the relative strength strategy of a zero-cost portfolio, i.e., taking a long position in winner stocks and a short position in loser stocks from the recent past, are termed momentum profits. In recent times, there has been a lot of controversy and concern about the sources of momentum profits, since the existence of these profits acts as evidence of earning non-normal returns from publicly available information, directly contradicting the Efficient Market Hypothesis. The literature review reveals conflicting theories and differing evidence on the sources of momentum profits. This paper aims at re-examining the sources of momentum profits in Indian capital markets. The study focuses on assessing the effect of fundamental as well as behavioral sources in order to understand the role of investor behavior in stock returns and suggest (if any) improvements to existing behavioral asset pricing models. This paper adopts a calendar time methodology to calculate momentum profits for 6 different strategies, with and without skipping a month between the ranking and holding periods. For each J/K strategy, under this methodology, at the beginning of each month t stocks are ranked on the past J months' average returns and sorted in descending order. Stocks in the upper decile are termed winners and those in the bottom decile losers. After ranking, long and short positions are taken in winner and loser stocks respectively, and both portfolios are held for the next K months, in such a manner that at any given point of time we have K overlapping long and short portfolios each, ranked from month t-1 to month t-K. At the end of the period, the returns of both long and short portfolios are calculated by taking an equally weighted average across all months. Long minus short returns (LMS) are the momentum profits for each strategy. After testing for momentum profits, to study the role market risk plays in momentum profits, CAPM and Fama-French three-factor model adjusted LMS returns are calculated. In the final phase of studying sources, a decomposition methodology has been used for breaking up the profits into unconditional means, serial correlations, and cross-serial correlations. This methodology is unbiased, can be used with the decile-based methodology and helps to test the effect of behavioral and fundamental sources altogether. From all the analysis, it was found that momentum profits do exist in Indian capital markets, with market risk playing little role in defining them. Also, it was observed that though momentum profits have multiple sources (risk, serial correlations, and cross-serial correlations), cross-serial correlations play a major role in defining these profits. The study revealed that momentum profits do have multiple sources; however, cross-serial correlations, i.e., the effect of the returns of other stocks, play a major role. This means that in addition to studying investors' reactions to the information of the same firm, it is also important to study how they react to the information of other firms. The analysis confirms that investor behavior does play an important role in stock returns, and incorporating both aspects of investors' reactions in behavioral asset pricing models helps make them better.
Keywords: investor behavior, momentum effect, sources of momentum, stock returns
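A minimal pandas sketch of one formation month of the decile-based J/K ranking described above (long the winner decile, short the loser decile). The return panel is random placeholder data, and the overlapping-portfolio bookkeeping and risk adjustments are omitted for brevity.

```python
# Sketch of one formation month of a J/K momentum strategy:
# rank stocks on past J-month average returns, long top decile, short bottom decile.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
J = 6
# Placeholder monthly return panel: rows = months, columns = stocks
returns = pd.DataFrame(rng.normal(0.01, 0.05, size=(24, 50)),
                       columns=[f"stock_{i}" for i in range(50)])

formation = returns.iloc[-J:].mean()               # past J-month average return per stock
deciles = pd.qcut(formation, 10, labels=False)     # 0 = losers ... 9 = winners

winners = formation.index[deciles == 9]
losers = formation.index[deciles == 0]

# Equally weighted long-minus-short (LMS) return over the next holding month (placeholder data)
next_month = pd.Series(rng.normal(0.01, 0.05, size=50), index=returns.columns)
lms = next_month[winners].mean() - next_month[losers].mean()
print(f"LMS return for this formation month: {lms:.4f}")
```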
Procedia PDF Downloads 302
110 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study
Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin
Abstract:
Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games revealed that approximately three quarters of the injuries in the games occur in the lower extremity. These injuries can have player, team and institutional impacts due to multiple factors, including financial burden and time loss from competition. Research has shown it is possible to record injury data consistently with the GAA through a closed online recording system known as the GAA injury surveillance database. It has been established that determining the incidence of injury is the first step of injury prevention. The goals of this study were to create a dynamic GAA15 injury prevention programme which addressed five key components/goals: avoid positions associated with a high risk of injury, enhance flexibility, enhance strength, optimize plyometrics and address sports-specific agilities. These key components are internationally recognized through the Prevent Injury, Enhance Performance (PEP) programme, which has proven reductions in ACL injuries of 74%. In national Gaelic games the programme is known as the GAA15, which has been devised from the principles of the PEP. No such injury prevention strategies have been published on this cohort in Gaelic games to date. This study will investigate the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post-season testing involved two objective screening tests: the Y Balance Test and the Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15. The programme was performed before every training session and game, and the existing GAA injury surveillance database was accessed to monitor players' injuries by the college sports rehabilitation athletic therapist. Retrospective analysis of the ITC clinic records was performed in conjunction with the database analysis as a means of tracking injuries that may have been missed. The effects of the programme were analysed by comparing the intervention group's Y Balance and Three Hop Test scores to those of an age/gender-matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance Test scores for the intervention group increased in both the posterolateral (p=.005 and p=.001) and posteromedial reach directions (p=.001 and p=.001). A decrease in performance was determined for the Three Hop Test (p=.039). Overall, twenty-five injuries were reported during the season, resulting in an injury rate of 3.00 injuries/1000hrs of participation; 1.25 injuries/1000hrs training and 4.25 injuries/1000hrs match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected in April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training
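A small sketch of the incidence-rate arithmetic reported above (injuries per 1000 hours of exposure); the exposure-hour figures are back-calculated placeholders chosen to reproduce the quoted rates, not the study's recorded exposure data.

```python
# Injury incidence rate = (number of injuries / exposure hours) * 1000
def incidence_rate(injuries: int, exposure_hours: float) -> float:
    return injuries / exposure_hours * 1000

# Illustrative exposure split consistent with the reported overall rate of
# 3.00 injuries/1000 h from 25 injuries (total exposure ~8333 h is an assumption).
print(round(incidence_rate(25, 8333), 2))   # ~3.00 injuries/1000 h participation
print(round(incidence_rate(5, 4000), 2))    # placeholder training exposure -> 1.25
print(round(incidence_rate(20, 4706), 2))   # placeholder match-play exposure -> ~4.25
```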
Procedia PDF Downloads 336
109 Biotite from Contact-Metamorphosed Rocks of the Dizi Series of the Greater Caucasus
Authors: Irakli Javakhishvili, Tamara Tsutsunava, Giorgi Beridze
Abstract:
The Caucasus is a component of the Mediterranean collision belt. The Dizi series is situated within the Greater Caucasian region of the Caucasus and crops out in the core of the Svaneti anticlinorium. The series was formed in continental slope conditions on the southern passive margin of a small ocean basin. The Dizi series crops out over about 560 square km with a thickness of 2000-2200 m. The rocks are faunally dated from the Devonian to the Triassic inclusive. The series is composed of terrigenous phyllitic schists, sandstones, quartzite aleurolites and lenses and interlayers of marbleized limestones. During the early Cimmerian orogeny, they underwent regional metamorphism of the chlorite-sericite subfacies of the greenschist facies. Typical minerals of the metapelites are chlorite, sericite, augite, quartz, and tourmaline, while in the basic rocks actinolite, fibrolite, prehnite, calcite, and chlorite are developed. The Dizi series is cut by polyphase intrusions of gabbros, diorites, quartz-diorites, syenite-diorites, syenites, and granitoids. Their K-Ar age dating (176-165 Ma) indicates that their formation corresponds to the Bathonian orogeny. The Dizi series is well studied geologically, but the very complicated processes of its regional and contact metamorphism are insufficiently investigated. The aim of the authors was a detailed study of the contact metamorphism processes of the series rocks. Investigations were accomplished applying the following methodologies: finding of key sections, collection of material, microscopic study of samples, microprobe and structural analysis of minerals, and X-ray determination of elements. The contact-metamorphosed rocks of the Dizi series formed under the influence of the Bathonian magmatites on metapelites and carbonate-enriched rocks. They are represented by quartz, biotite, sericite, graphite, andalusite, muscovite, plagioclase, corundum, cordierite, clinopyroxene, hornblende, cummingtonite, actinolite, and tremolite-bearing hornfels, marbles, and skarns. The contact metamorphism aureole reaches 350 meters. Biotite is developed only in the contact-metamorphosed rocks and is a rather informative index mineral. In metapelites, biotite is formed as a result of the reaction between phengite, chlorite, and leucoxene, while in basites it replaces actinolite or actinolite-hornblende. To study the compositional regularities of biotites, they were investigated from both metapelites and metabasites. In total, biotite from the basites is characterized by an increased content of titanium in contrast to biotite from the metapelites. Biotites from metapelites are distinguished by an increased amount of aluminum. In biotites, an increased amount of titanium and aluminum is observed as they approach the contact, while their magnesia content decreases. Metapelite biotites are characterized by an increased amount of alumina in the aluminum octahedra, in contrast to biotites of the basites. In biotites of metapelites, the amount of tetrahedral aluminum is 28–34% and octahedral 15–26%, while in the basites tetrahedral aluminum is 28–33% and octahedral 7–21%. As a result of the study of minerals, including biotite, from the contact-metamorphosed rocks of the Dizi series, three exocontact zones with corresponding mineral assemblages were identified. 
It was established that contact metamorphism in the aureole of the Dizi series intrusions occurred at a significantly higher temperature and lower pressure than the regional metamorphism that preceded it.
Keywords: biotite, contact metamorphism, Dizi series, the Greater Caucasus
Procedia PDF Downloads 131
108 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure
Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek
Abstract:
Background: A number of approaches to assess the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but they are not free of controversies and did not manage to produce a commonly accepted consensus. The aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We have applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of change in terms of health system outcomes. Methods: Data was collected from a 25-year time period after the fall of communism, subsetted into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and the Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of change over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were observed in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change is high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small when it comes to Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the fluctuation dynamics of the measured value over time, yet it does not necessarily mean that such a dynamic range reflects an improvement in a given country. In reality, some of the countries moved along the scale with different effects. Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost distance to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving the outcomes. Conclusions: Countries that have decided to implement comprehensive health reform have achieved a positive result in terms of further improvements in health system efficiency levels. Besides, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcomes improvement are highly diverse between different countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
Keywords: health system outcomes, health reforms, health system assessment, health system evaluation
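A minimal sketch of the zeroed unitarization step named in the methods, using the common form z = (x - min)/(max - min) for stimulant variables and its complement for destimulants; the indicators, values, and the simple-mean aggregation are assumptions, as the abstract does not list the variables or weights actually used.

```python
# Zeroed unitarization: rescale each indicator to [0, 1], then aggregate into
# a synthetic measure (here a simple mean). Indicator values are placeholders.
import numpy as np

indicators = np.array([
    # life_expectancy, infant_mortality (destimulant), coverage_%
    [76.0, 4.2, 93.0],   # country A
    [71.5, 9.8, 78.0],   # country B
    [74.0, 6.5, 85.0],   # country C
])
destimulant = np.array([False, True, False])  # higher is worse for destimulants

mins, maxs = indicators.min(axis=0), indicators.max(axis=0)
z = (indicators - mins) / (maxs - mins)
z[:, destimulant] = 1.0 - z[:, destimulant]   # flip destimulants so 1 = best

smo = z.mean(axis=1)                          # synthetic measure per country
print(smo.round(3))
```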
Procedia PDF Downloads 289
107 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan
Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova
Abstract:
While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study aims to test the method of establishing a correlation between urban heat islands (UHI) and urban greenery distribution for prioritization of heat-vulnerable areas for revegetation. Armenia has failed to consider measures to mitigate UHI in urban development strategies despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can be effective in reducing UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of massifs of land surface types using normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Examination of the heat-vulnerable city areas (in this study, the proportion of individuals aged 75 years and above) is based on demographic data (Census 2011). Based on satellite images (Sentinel-2) captured on June 5, 2021, NDVI calculations were conducted. The massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was identified using one satellite image from Landsat-8 on August 15, 2021. In this research, local relief is not considered, as the study mainly focuses on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas. The temperature excess ranges from a low in Norq Marash to a high in Nubarashen. Norq Marash and Avan have the highest tree and grass coverage proportions, with 56.2% and 54.5%, respectively. In other districts, the balance of wastelands and buildings is three times higher than the grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. Studies have shown that decreased tree and grass coverage within a district correlates with a higher temperature increase. The temperature excess is highest in Erebuni, Ajapnyak, and Nubarashen districts. These districts have less than 25% of their area covered with grass and trees. On the other hand, Avan and Norq Marash districts have a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion of the elderly population (35%) aged 75 years and above reside in the Erebuni, Ajapnyak, and Shengavit neighborhoods, which are more susceptible to heat stress with an LST higher than in other city districts. The findings suggest that the method of comparing the distribution of green massifs and LST can contribute to the prioritization of heat-vulnerable city areas for revegetation. The method can become a rationale for the formation of an urban greening program.
Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation
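A small sketch of the district-level comparison described above, quantifying the relationship between green coverage share and land surface temperature excess with a Pearson correlation; the district values are loose placeholders based on figures quoted in the abstract, not the study dataset.

```python
# Correlate district-level green coverage with land surface temperature excess.
# Values are illustrative placeholders, not the study's measured data.
from scipy import stats

green_share_pct = [56.2, 54.5, 24.0, 22.0, 20.0]   # tree+grass cover per district
lst_excess_c    = [1.5, 1.8, 4.5, 4.8, 5.2]        # °C above non-urban surroundings

r, p = stats.pearsonr(green_share_pct, lst_excess_c)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a strongly negative r would support
                                            # prioritizing low-greenery districts
```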
Procedia PDF Downloads 70
106 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites
Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis
Abstract:
Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates that there is an increasing demand for their employment in high-value bespoke applications such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) structures that dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several efforts by research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine the intrinsically high relative strengths exhibited with an improved z-axis electrical response as well. However, only a limited number of studies deal with the printing of nano-enhanced polymer inks to produce a pattern at the dry fabric level that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the employment of a screen-printing process on technical dry fabrics using nano-reinforced polymer-based inks to achieve the required through-thickness conductivity, opening new pathways for the application of fiber reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed, printed in different patterns. The aim of the present study is to investigate both the effect of the nanoparticle concentration and the droplet patterns (diameter, inter-droplet distance and coverage) to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is measured initially at the ink level, using a "four-probe" configuration, to pinpoint the optimum concentrations to be employed. Upon printing of the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting dry fabrics, in alignment with the fabrication of CFRPs that requires adequate wetting by the epoxy matrix. Results demonstrated increased electrical conductivities of the printed droplets, with an increase of the conductivity from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printability of dense and dispersed patterns has exhibited promising results in terms of increasing the z-axis conductivity without inhibiting the penetration of the epoxy matrix at the processing stage of fiber reinforced composites. The high value and niche prospect of the resulting applications that can stem from CFRPs with increased through-thickness electrical conductivities highlight the potential of the presented endeavor, signifying screen printing as the process to nano-enable z-axis electrical conductivity in composite laminas. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013).
Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing
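A small sketch of how a four-probe V/I reading on a thin printed film can be converted to conductivity, using the standard sheet-resistance relation Rs = (pi/ln 2)(V/I) and sigma = 1/(Rs*t). The voltage, current, and film thickness are placeholders, and the geometric correction factors appropriate to the actual sample shape are not given in the abstract.

```python
# Four-point probe: thin-film sheet resistance and bulk conductivity.
# Measurement values below are placeholders, not reported data.
import math

V = 0.85          # volts measured across the inner probes (placeholder)
I = 1.0e-3        # amperes forced through the outer probes (placeholder)
t = 50e-6         # printed film thickness in metres (placeholder)

sheet_resistance = (math.pi / math.log(2)) * (V / I)   # ohms per square
conductivity = 1.0 / (sheet_resistance * t)            # siemens per metre

print(f"Rs = {sheet_resistance:.1f} ohm/sq, sigma = {conductivity:.2f} S/m")
```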
Procedia PDF Downloads 151
105 Optimization of Metal Pile Foundations for Solar Power Stations Using Cone Penetration Test Data
Authors: Adrian Priceputu, Elena Mihaela Stan
Abstract:
Our research addresses a critical challenge in renewable energy: improving efficiency and reducing the costs associated with the installation of ground-mounted photovoltaic (PV) panels. The most commonly used foundation solution is metal piles - with various sections adapted to soil conditions and the structural model of the panels. However, direct foundation systems are also sometimes used, especially in brownfield sites. Although metal micropiles are generally the first design option, understanding and predicting their bearing capacity, particularly under varied soil conditions, remains an open research topic. CPT Method and Current Challenges: Metal piles are favored for PV panel foundations due to their adaptability, but existing design methods rely heavily on costly and time-consuming in situ tests. The Cone Penetration Test (CPT) offers a more efficient alternative by providing valuable data on soil strength, stratification, and other key characteristics with reduced resources. During the test, a cone-shaped probe is pushed into the ground at a constant rate. Sensors within the probe measure the resistance of the soil to penetration, divided into cone penetration resistance and shaft friction resistance. Despite some existing CPT-based design approaches for metal piles, these methods are often cumbersome and difficult to apply. They vary significantly due to soil type and foundation method, and traditional approaches like the LCPC method involve complex calculations and extensive empirical data. The method was developed by testing 197 piles on a wide range of ground conditions, but the tested piles were very different from the ones used for PV pile foundations, making the method less accurate and practical for steel micropiles. Project Objectives and Methodology: Our research aims to develop a calculation method for metal micropile foundations using CPT data, simplifying the complex relationships involved. The goal is to estimate the pullout bearing capacity of piles without additional laboratory tests, streamlining the design process. To achieve this, a case study was selected which will serve for the development of an 80ha solar power station. Four testing locations were chosen spread throughout the site. At each location, two types of steel profiles (H160 and C100) were embedded into the ground at various depths (1.5m and 2.0m). The piles were tested for pullout capacity under natural and inundated soil conditions. CPT tests conducted nearby served as calibration points. The results served for the development of a preliminary equation for estimating pullout capacity. Future Work: The next phase involves validating and refining the proposed equation on additional sites by comparing CPT-based forecasts with in situ pullout tests. This validation will enhance the accuracy and reliability of the method, potentially transforming the foundation design process for PV panels.
Keywords: cone penetration test, foundation optimization, solar power stations, steel pile foundations
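The abstract does not give the form of the preliminary pullout equation, so the sketch below shows one common way CPT sleeve-friction data feed a shaft-capacity estimate: summing unit sleeve friction over the embedded shaft area with an empirical reduction factor. The profile values, pile perimeter, and the 0.7 factor are illustrative assumptions, not the authors' calibrated relationship.

```python
# Rough CPT-based pullout (shaft) capacity: Q = alpha * sum(f_s_i * perimeter * dz)
# All numbers below are illustrative assumptions, not the study's calibration.

fs_profile_kpa = [25.0, 32.0, 40.0, 45.0]   # CPT sleeve friction every 0.5 m depth step
dz = 0.5                                     # depth increment, m
perimeter = 0.52                             # effective pile perimeter, m (assumed C100-type profile)
alpha = 0.7                                  # empirical reduction factor (assumed)

shaft_terms = [fs * perimeter * dz for fs in fs_profile_kpa]
pullout_capacity_kn = alpha * sum(shaft_terms)   # kPa * m^2 = kN
print(f"Estimated pullout capacity ~ {pullout_capacity_kn:.1f} kN")
```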
Procedia PDF Downloads 53
104 Promotion of Healthy Food Choices in School Children through Nutrition Education
Authors: Vinti Davar
Abstract:
Introduction: Childhood overweight increases the risk for certain medical and psychological conditions. Millions of school-age children worldwide are affected by serious yet easily treatable and preventable illnesses that inhibit their ability to learn. Healthier children stay in school longer, attend more regularly, learn more and become healthier and more productive adults. Schools are an important setting for nutrition education because one can reach most children, teachers and parents. These years offer a key window for shaping their lifetime habits, which have an impact on their health throughout life. Against this background, an attempt was made to impart nutrition education to school children in Haryana state of India to promote healthy food choices and assess the effectiveness of this program. Methodology: This study was completed in two phases. During the first phase, a pre-intervention anthropometric and dietary survey was conducted; the teaching materials for the nutrition intervention program were developed and tested; and the questionnaire was validated. In the second phase, an intervention was implemented in two schools of Kurukshetra, Haryana for six months by personal visits once a week. A total of 350 children in the age group of 6-12 years were selected. Out of these, 279 children, 153 boys and 126 girls, completed the study. The subjects were divided into four groups, namely underweight, normal, overweight and obese, based on body mass index-for-age categories. A colorful PowerPoint presentation was used to improve the quality of tiffin, snacks and meals, emphasizing the inclusion of all food groups, especially vegetables every day and fruits at least 3-4 days per week. An extra 20 minutes of aerobic exercise daily was likewise organized and a healthy school environment created. Provision of clean drinking water by school authorities was ensured. Selling of soft drinks and energy-dense snacks in the school canteen as well as advertisements for soft drinks and snacks on the school walls were banned. Post-intervention, anthropometric indices and food selections were reassessed. Results: The results of this study reiterate the critical role of nutrition education and promotion in improving healthier food choices among school children. It was observed that normal, overweight and obese children participating in the nutrition education intervention program significantly (p≤0.05) increased their daily seasonal fruit and vegetable consumption. Fat and oil consumption was significantly reduced by overweight and obese subjects. Fast food intake was controlled by obese children. The nutrition knowledge of school children significantly improved (p≤0.05) from pre- to post-intervention. A highly significant increase (p≤0.00) was noted in the nutrition attitude score after intervention in all four groups. Conclusion: This study has shown that a well-planned nutrition education program could improve nutrition knowledge and promote positive changes in healthy food choices. A nutrition program inculcates wholesome eating and active lifestyle habits in children and adolescents, which could not only protect them from chronic diseases and early death but also reduce healthcare costs and enhance the quality of life of citizens and thereby nations.
Keywords: children, eating habits, healthy food, obesity, school going, fast foods
Procedia PDF Downloads 203
103 Turkish Airlines' 85th Anniversary Commercial: An Analysis of the Institutional Identity of a Brand in Terms of Glocalization
Authors: Samil Ozcan
Abstract:
Airline companies target different customer segments in consideration of pricing, service quality, flight network, etc., and their brand positioning accords with the marketization strategies developed in the same direction. The object of this study, Turkish Airlines, has many peculiarities regarding its brand positioning as compared to its rivals in the sector. In the first place, it appeals to a global customer group because of its Star Alliance membership and its broad flight network with 315 destination points. The second group in its customer segmentation includes domestic customers. For this group, the company follows a marketing strategy that plays to local culture and accentuates the image of Turkishness as an emotional allurement. The advertisements and publicity projects designed in this regard put little emphasis on the service quality the company offers to its clients; they address the emotions of the consumers rather than individual benefits and rely on the historical memory of the nation and shared cultural values. This study examines the publicity work aimed at the second customer segment, focusing on Turkish Airlines' 85th Anniversary Commercial through a symbolic meaning analysis approach. The commercial presents six stories with undertones of nationalism in its theme. Nationalism is not just the product of collective interests based on reason but a result of patriotism in the sense of loyalty to state and nation and love of ethnic belonging. While nationalism refers to concrete notions such as blood ties, common ancestors, and shared history, it is not from the actuality of these notions that it draws its real strength but from the emotions invested in them. The myths of origin, the idea of a common homeland, boundary definitions, and symbolic acculturation have instrumental importance in the development of these commonalities. The commercial offers concrete examples for an analysis of Connor's definition of nationalism based on emotions. Turning points in the history of the Turkish Republic and the historical mission Turkish Airlines undertook in these moments are narrated in six stories in the commercial with a highly emotional theme. These emotions, in general, depend on collective memory generated by national consciousness. Collective memory is not simply remembering the past. It is constructed through the reconstruction and reinterpretation of the past in the present moment. This study inquires into the motivations behind the nationalist emotions generated within the collective memory by engaging with the commercial released for the 85th anniversary of Turkish Airlines as the object of analysis. Symbols and myths can be read as key concepts that reveal the relation between 'identity and memory', because myths and symbols do not merely reflect collective memory; they reconstruct it as well. In this sense, the theme of the commercial defines the image of Turkishness with virtues such as self-sacrifice, helpfulness, humanity, and courage through a process of meaning creation based on symbolic mythologizations like flag and homeland. These virtues go beyond describing the image of Turkishness and become an instrument that defines and gives meaning to Turkish identity.
Keywords: collective memory, emotions, identity, nationalism
Procedia PDF Downloads 152
102 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications for the collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), for making beams homogeneous for high-power lasers, as a critical component in Shack-Hartmann sensors, and for fiber optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and the reduction of alignment and packaging costs are necessary. Compliance with high-quality standards in the manufacturing of micro-optical components is a precondition to be competitive in worldwide markets. Therefore, high demands are put on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust with high resolution. The technique should provide non-contact, non-invasive and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of micro-lenses. However, these techniques need more experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths due to numerical reconstruction. Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, first a digital hologram is recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed by using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported. 
The sag of the microlens agrees well, within experimental limits, with the value provided in the manufacturer's specification.
Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
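A minimal sketch of the phase-to-profile conversion described above: the measured phase difference gives the optical path change, the thickness (sag) follows from the refractive index contrast, and the radius of curvature can be estimated from sag and semi-aperture with the spherometer relation. The wavelength, refractive index, and phase value are placeholders.

```python
# Convert a measured phase difference to lens thickness (sag) and estimate the
# radius of curvature. Wavelength, refractive index and phase are placeholders.
import math

wavelength = 632.8e-9      # He-Ne laser wavelength, m (assumed)
n_lens, n_air = 1.46, 1.0  # refractive indices (placeholder lens material)
delta_phi = 40.0           # unwrapped phase difference at the lens centre, rad (placeholder)
semi_aperture = 60e-6      # microlens semi-aperture, m (placeholder)

# Optical path difference -> physical thickness (sag at the lens centre)
sag = delta_phi * wavelength / (2 * math.pi * (n_lens - n_air))

# Spherometer relation for the radius of curvature of a spherical cap
radius = (semi_aperture**2 + sag**2) / (2 * sag)

print(f"sag = {sag*1e6:.2f} um, R = {radius*1e6:.1f} um")
```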
Procedia PDF Downloads 497
101 Fully Autonomous Vertical Farm to Increase Crop Production
Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek
Abstract:
New technologies in agriculture are opening new challenges and new opportunities. Among these, certainly, robotics, vision, and artificial intelligence are the ones that will make a significant leap, compared to traditional agricultural techniques, possible. In particular, the indoor farming sector will be the one that will benefit the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development such as automatic tools to perform different activities on soil and plants, as well as research to introduce an extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project of a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots in environments with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, (iv) integrate AI-based algorithms as decision support system to improve quality production. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strength characteristics highlighted by the models, (iii) implementation and experimental tests performed to test the real effectiveness of the systems created, evaluate any weaknesses so as to proceed with a targeted development. To these aims, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to the knowledge of detailed models developed based on the information collected, which allows for deepening the knowledge of these types of crops and guarantees the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow synergistic operation of all systems.
Keywords: automation, vertical farming, robot, artificial intelligence, vision, control
Procedia PDF Downloads 38
100 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean=27) and any ATAR graduates without NAPLAN data (mean=20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. There were an additional 173 students not releasing their ATARs to their school (14%), requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R2=0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R2=0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R2 was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive for laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal proportions of cohorts meeting or falling short of expectation and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
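A minimal pandas sketch of the percentile-shift metric as described: each student's strongest NAPLAN domain is converted to a statewide percentile and subtracted from the final ATAR. The student records and the flat participation correction are placeholders; the authors' actual correction for ATAR participation at each NAPLAN level is not specified in the abstract.

```python
# Percentile-shift gain: ATAR minus statewide percentile of strongest NAPLAN domain.
# Student records and the participation correction are illustrative placeholders.
import pandas as pd

students = pd.DataFrame({
    "naplan_percentile_strongest": [92.0, 75.0, 60.0, 98.0],  # statewide percentile
    "atar": [95.40, 71.15, 68.90, 94.05],
})

students["percentile_shift"] = students["atar"] - students["naplan_percentile_strongest"]

# Placeholder correction for plausible ATAR participation at each NAPLAN level
# (the study applies such a correction; its exact form is not given in the abstract).
participation_correction = 2.0
students["corrected_shift"] = students["percentile_shift"] + participation_correction

print(students[["percentile_shift", "corrected_shift"]])
print("share gaining:", (students["corrected_shift"] > 0).mean())
```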
Procedia PDF Downloads 67
99 Effects and Mechanisms of an Online Short-Term Audio-Based Mindfulness Intervention on Wellbeing in Community Settings and How Stress and Negative Affect Influence the Therapy Effects: Parallel Process Latent Growth Curve Modeling of a Randomized Control
Authors: Man Ying Kang, Joshua Kin Man Nan
Abstract:
The prolonged pandemic has posed alarming public health challenges to various parts of the world, and as face-to-face mental health treatment is largely discounted for the control of virus transmission, online psychological services and self-help mental health kits have become essential. Online self-help mindfulness-based interventions have proved their effects on fostering mental health for different populations across the globe. This paper aimed to test the effectiveness of an online short-term audio-based mindfulness (SAM) program in enhancing wellbeing and dispositional mindfulness and reducing stress and negative affect in community settings in China, and to explore possible mechanisms of how dispositional mindfulness, stress, and negative affect influenced the intervention effects on wellbeing. Community-dwelling adults were recruited via online social networking sites (e.g., QQ, WeChat, and Weibo). Participants (n=100) were randomized into the mindfulness group (n=50) and a waitlist control group (n=50). In the mindfulness group, participants were advised to spend 10–20 minutes listening to the audio content, including mindful-form practices (e.g., eating, sitting, walking, or breathing), and then to practice daily mindfulness exercises for 3 weeks (a total of 21 sessions), whereas those in the control group received the same intervention after data collection in the mindfulness group. Participants in the mindfulness group needed to fill in the World Health Organization Five Well-Being Index (WHO), Positive and Negative Affect Schedule (PANAS), Perceived Stress Scale (PSS), and Freiburg Mindfulness Inventory (FMI) four times: at baseline (T0) and at 1 (T1), 2 (T2), and 3 (T3) weeks, while those in the waitlist control group only needed to fill in the same scales at pre- and post-intervention. Repeated-measures analysis of variance, paired-sample t-tests, and independent-sample t-tests were used to analyze the variable outcomes of the two groups. Parallel process latent growth curve modeling analysis was used to explore the longitudinal moderated mediation effects. The dependent variable was the WHO slope from T0 to T3, the independent variable was Group (1=SAM, 2=Control), the mediator was the FMI slope from T0 to T3, and the moderators were T0NA and T0PSS, examined separately. The different levels of moderator effects on the WHO slope were explored, including low T0NA or T0PSS (Mean-SD), medium T0NA or T0PSS (Mean), and high T0NA or T0PSS (Mean+SD). The results found that SAM significantly improved and predicted higher levels of the WHO slope and FMI slope, as well as significantly reduced NA and PSS. The FMI slope positively predicted the WHO slope. The FMI slope partially mediated the relationship between SAM and the WHO slope. Baseline NA and PSS as moderators were found to be significant between SAM and the WHO slope and between SAM and the FMI slope, respectively. The conclusion was that SAM was effective in promoting levels of mental wellbeing, positive affect, and dispositional mindfulness as well as reducing negative affect and stress in community settings in China. SAM improved wellbeing faster through the faster enhancement of dispositional mindfulness. In participants with medium-to-high negative affect and stress, the therapy effects of SAM on the speed of wellbeing improvement were buffered.
Keywords: mindfulness, negative affect, stress, wellbeing, randomized control trial
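A hedged sketch of the parallel process latent growth curve mediation structure described above, written as generic structural equations; the notation and parameterization are illustrative, as the abstract does not give the model's exact specification.

```latex
\begin{aligned}
\text{Growth part:}\quad
& \mathrm{WHO}_{ti} = \eta^{W}_{0i} + \lambda_t\,\eta^{W}_{1i} + \varepsilon_{ti},
\qquad \mathrm{FMI}_{ti} = \eta^{F}_{0i} + \lambda_t\,\eta^{F}_{1i} + \delta_{ti},
\qquad \lambda_t = 0,1,2,3 \\
\text{Mediation part:}\quad
& \eta^{F}_{1i} = a\,\mathrm{Group}_i + u_i,
\qquad \eta^{W}_{1i} = c'\,\mathrm{Group}_i + b\,\eta^{F}_{1i} + v_i \\
\text{Indirect effect:}\quad
& a \times b \ \text{(evaluated at low, medium, and high baseline NA or PSS when moderated)}
\end{aligned}
```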
Procedia PDF Downloads 108
98 Chemical, Biochemical and Sensory Evaluation of a Quadrimix Complementary Food Developed from Sorghum, Groundnut, Crayfish and Pawpaw Blends
Authors: Ogechi Nzeagwu, Assumpta Osuagwu, Charlse Nkwoala
Abstract:
Malnutrition in infants due to poverty, poor feeding practices, and the high cost of commercial complementary foods, among other factors, is a concern in developing countries. The study evaluated the proximate, vitamin and mineral compositions, antinutrients and functional properties of a complementary food made from sorghum, groundnut, crayfish and pawpaw flour blends, together with its biochemical, haematological and sensory evaluation, using standard procedures. The blends were formulated on the protein requirement of infants (18 g/day) using the Nutrisurvey linear programming software in ratios of sorghum (S), groundnut (G), crayfish (C) and pawpaw (P) flours of 50:25:10:15 (SGCP1), 60:20:10:10 (SGCP2), 60:15:15:10 (SGCP3) and 60:10:20:10 (SGCP4). Plain pap (fermented maize flour) (TCF) and cerelac (a commercial complementary food) served as the basal and control diets. Thirty weanling male albino rats aged 28-35 days and weighing 33-60 g were purchased and used for the study. After acclimatization, the rats were fed gruel produced from the experimental diets and the control, with water ad libitum, daily for 35 days. The effects of the blends on the lipid profile, blood glucose, haematological indices (RBC, HB, PCV, MCV), liver and kidney function and weight gain of the rats were assessed. Acceptability of the gruel was assessed at the end of the rat feeding trial with forty mothers of infants aged ≥ 6 months, who gave their informed consent to participate, using a 9-point hedonic scale. Data were analyzed for means and standard deviations; analysis of variance was performed, means were separated using Duncan's multiple range test, and significance was judged at 0.05, all using SPSS version 22.0. The results indicated that the crude protein, fibre, ash and carbohydrate contents of the formulated diets were either comparable to or higher than the values in cerelac. The formulated diets (SGCP1-SGCP4) were significantly (P<0.05) higher in vitamin A and thiamin compared to cerelac. The iron content of the formulated diets SGCP1-SGCP4 (4.23-6.36 mg/100 g) was within the recommended iron intake of infants (0.55 mg/day). The phytate (1.56-2.55 mg/100 g) and oxalate (0.23-0.35 mg/100 g) contents of the formulated diets were within the permissible limits of 0-5%. Among the functional properties, bulk density, swelling index, % dispersibility and water absorption capacity significantly (P<0.05) increased and compared favourably with cerelac. The essential amino acids of the formulated blends were within the amino acid profile of the FAO/WHO/UNU reference protein for children 0.5-2 years of age. The urea concentration of rats fed SGCP1-SGCP4 (19.48 mmol/L, 23.76 mmol/L, 24.07 mmol/L, 23.65 mmol/L, respectively) was significantly higher than that of rats fed cerelac (16.98 mmol/L); however, plain pap gave the lowest value (9.15 mmol/L). Rats fed SGCP1-SGCP4 (116 mg/dl, 119 mg/dl, 115 mg/dl, 117 mg/dl, respectively) had significantly higher glucose levels than those fed cerelac (108 mg/dl). Liver function parameters (AST, ALP and ALT), lipid profile (triglycerides, HDL, LDL, VLDL) and haematological parameters of rats fed the formulated diets were within normal ranges. Rats fed SGCP1 gained more weight (90.45 g) than rats fed SGCP2-SGCP4 (71.65 g, 79.76 g, 75.68 g), TCF (20.13 g) and cerelac (59.06 g). In all the sensory attributes, the control was preferred over the formulated diets. The formulated diets were generally adequate and may have the potential to meet the nutrient requirements of infants as complementary food.Keywords: biochemical, chemical evaluation, complementary food, quadrimix
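The blend ratios above came from protein-targeted linear programming in Nutrisurvey; as a hedged illustration of that kind of formulation step (not the study's data or software), a least-cost blend satisfying a protein target can be set up with scipy as follows. All nutrient values, costs and constraint levels are placeholders.

```python
# Illustrative sketch only: least-cost blend formulation by linear programming,
# in the spirit of the Nutrisurvey step described above. All coefficients below
# are placeholders, not the study's data.
from scipy.optimize import linprog

# Ingredients: sorghum, groundnut, crayfish, pawpaw (fractions of 100 g of blend)
cost = [0.4, 0.9, 2.5, 0.6]            # assumed relative cost per 100 g ingredient
protein = [10.0, 25.0, 60.0, 0.6]      # assumed g protein per 100 g ingredient
energy = [340.0, 570.0, 290.0, 40.0]   # assumed kcal per 100 g ingredient

# Minimise cost subject to: protein >= 18 g, energy >= 400 kcal, fractions sum to 1.
res = linprog(
    c=cost,
    A_ub=[[-p for p in protein], [-e for e in energy]],   # -protein <= -18, etc.
    b_ub=[-18.0, -400.0],
    A_eq=[[1.0, 1.0, 1.0, 1.0]],
    b_eq=[1.0],
    bounds=[(0.05, 0.7)] * 4,          # keep every ingredient present in the blend
    method="highs",
)
print(res.x)   # optimal ingredient fractions, if the constraints are feasible
```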
Procedia PDF Downloads 166
97 Qualitative Research on German Household Practices to Ease the Risk of Poverty
Authors: Marie Boost
Abstract:
Despite activation policies, forced personal initiative to step out of unemployment and a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, approximately 16 million persons (20.2% of the German population) were at risk of poverty or social exclusion. This is illustrated by an unemployment rate of 13.3% in the research area, located in East Germany. Despite this high number of persons living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty – apart from solely relying on welfare state benefits or entering a stable, well-paid job. Most of them are struggling in precarious living circumstances, switching from one or several short-term, low-paid jobs into self-employment or unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project “RESCuE”, resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers, representatives from welfare state agencies, NGOs and charity organizations, and 25 household interviews were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. The research is based on the triangulation of biographical narrative interviews, followed by participatory photo interviews asking the household members to portray their typical everyday life. The presentation focuses on the explanatory strength of this mixed-methods approach in order to show the potential of household practices to overcome financial hardship. The methodological combination allows an in-depth analysis of the families' and households' everyday living circumstances, including their poverty and employment situation, whether formal or informal. Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Especially through the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as the photographs picture typical practices like cultivating and gathering fruits and vegetables or going fishing. One of the central findings is the multiple purposes of these practices. They contribute to easing financial burdens through consumption reduction and strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to contribute to easing financial hardship, e.g. through the use of commons, barter trade or simple mutual exchange (gift exchange). These practices can substitute external purchases and reduce expenses or even generate a small income. Mixing different income sources was found to be the most likely way out of poverty within the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned, and many persons put themselves at risk of overstressing themselves. Thus, the potentials and risks of resilient household practices are reflected on in the presentation.Keywords: consumption practices, labor market, qualitative research, resilience
Procedia PDF Downloads 219
96 Quantifying Impairments in Whiplash-Associated Disorders and Association with Patient-Reported Outcomes
Authors: Harpa Ragnarsdóttir, Magnús Kjartan Gíslason, Kristín Briem, Guðný Lilja Oddsdóttir
Abstract:
Introduction: Whiplash-Associated Disorder (WAD) is a health problem characterized by motor, neurological and psychosocial symptoms, stressing the need for a multimodal treatment approach. To achieve an individualized multimodal approach, prognostic factors need to be identified early using validated patient-reported and objective outcome measures. The aim of this study is to demonstrate the degree of association between patient-reported and clinical outcome measures of WAD patients in the subacute phase. Methods: Individuals (n=41) with subacute (≥1, ≤3 months) WAD (I-II), medium- to high-risk symptoms, or a neck pain rating ≥ 4/10 on the Visual Analog Scale (VAS) were examined. Outcome measures included measurements of movement control (Butterfly test) and cervical active range of motion (cAROM) using the NeckSmart system, a computer system using an inertial measurement unit (IMU) that connects to a computer. The IMU sensor is placed on the participant's head, and the participant receives visual feedback about the movement of the head. Patient-reported neck disability, pain intensity, general health, self-perceived handicap, central sensitization, and difficulties due to dizziness were measured using questionnaires. Excel and R statistical software were used for the statistical analyses. Results: Forty-one participants, 15 males (37%) and 26 females (63%), mean (SD) age 36.8 (±12.7), underwent data collection. Mean amplitude accuracy (AA) (SD) in the Butterfly test for the easy, medium, and difficult paths was 2.4 mm (0.9), 4.4 mm (1.8), and 6.8 mm (2.7), respectively. Mean cAROM (SD) for flexion, extension, left, and right rotation was 46.3° (18.5), 48.8° (17.8), 58.2° (14.3), and 58.9° (15.0), respectively. Mean scores on the Neck Disability Index (NDI), VAS, Dizziness Handicap Inventory (DHI), Central Sensitization Inventory (CSI), and 36-Item Short Form Survey RAND version (RAND) were 43% (17.4), 7 (1.7), 37 (25.4), 51 (17.5), and 39.2 (17.7), respectively. Females showed significantly greater deviation in AA compared to males for the easy and medium Butterfly paths (p<0.05). A statistically significant moderate to strong positive correlation was found between the DHI and the easy (r=0.6, p=0.05), medium (r=0.5, p=0.05) and difficult (r=0.5, p<0.05) Butterfly paths, between the total RAND score and all cAROMs (r between 0.4-0.7, p≤0.05) except flexion (r=0.4, p=0.7), and between the NDI score and the CSI (r=0.7, p<0.01), VAS (r=0.5, p<0.01), and DHI (r=0.7, p<0.01) scores, respectively. Discussion: All patient-reported and objective measures were found to be outside the reference range. The results suggest that females have worse movement control of the neck in the subacute WAD phase, whereas no statistical difference based on gender was found in the patient-reported measures; this suggests that females might have worse movement control than males in general in this phase. The correlation found between the DHI and the Butterfly test can be explained by the fact that the DHI measures proprioceptive symptoms, like dizziness and eye movement disorders, that can affect the outcome of movement control tests. A correlation was found between the total RAND score and cAROM, suggesting that a reduced range of motion affects quality of life. Significance: The NeckSmart system can detect abnormalities in cAROM, fine movement control, and kinesthesia of the neck. The results suggest that females have worse movement control than males and show moderate to high correlations between several patient-reported and objective measurements.Keywords: whiplash associated disorders, car-collision, neck, trauma, subacute
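For readers who want to reproduce this kind of association analysis (the authors used Excel and R; the sketch below uses Python and invented column names), Pearson correlations with p-values between a patient-reported score and the objective Butterfly measures can be computed as follows.

```python
# Illustrative only: Pearson correlations between a patient-reported score (DHI)
# and Butterfly amplitude-accuracy measures. Column names are assumptions.
import pandas as pd
from scipy.stats import pearsonr

def dhi_butterfly_correlations(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for path in ("butterfly_easy", "butterfly_medium", "butterfly_difficult"):
        r, p = pearsonr(df["dhi"], df[path])
        rows.append({"measure": path, "r": round(r, 2), "p": round(p, 3)})
    return pd.DataFrame(rows)
```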
Procedia PDF Downloads 69
95 Virulence Factors and Drug Resistance of Enterococci Species Isolated from the Intensive Care Units of Assiut University Hospitals, Egypt
Authors: Nahla Elsherbiny, Ahmed Ahmed, Hamada Mohammed, Mohamed Ali
Abstract:
Background: The enterococci may be considered opportunistic agents, particularly in immunocompromised patients. They are among the top three pathogens causing healthcare-associated infections (HAIs). Resistance to several commonly used antimicrobial agents is a remarkable characteristic of most species, which may carry various genes contributing to virulence. Objectives: To determine the prevalence of enterococci species causing HAIs in different intensive care units (ICUs), as well as intestinal carriage and environmental contamination; to study the antimicrobial susceptibility pattern of the isolates with special reference to vancomycin resistance; and to perform phenotypic and genotypic detection of gelatinase, cytolysin and biofilm formation among the isolates. Patients and Methods: This study was carried out in the infection control laboratory at Assiut University Hospitals over a period of one year. Clinical samples were collected from 285 patients with various HAIs acquired after admission to different ICUs. Rectal swabs were taken from 14 cases for detection of enterococci carriage. In addition, 1377 environmental samples were collected from the surroundings of the patients. Identification was done by conventional bacteriological methods and confirmed by the analytical profile index (API). Antimicrobial sensitivity testing was performed by the Kirby-Bauer disc diffusion method, and detection of vancomycin resistance was done by the agar screen method. For the isolates, phenotypic detection of cytolysin and gelatinase production was carried out, and biofilm formation was detected by the tube method, the Congo red method and the microtiter plate method. We performed polymerase chain reaction (PCR) for the detection of some virulence genes (gelE, cylA, vanA, vanB and esp). Results: Enterococci caused 10.5% of the HAIs. Respiratory tract infection was the predominant type (86.7%). The commonest species were E. gallinarum (36.7%), E. casseliflavus (30%), E. faecalis (30%), and E. durans (3.4%). Vancomycin resistance was detected in a total of 40% (12/30) of those isolates. The risk factors associated with acquiring vancomycin-resistant enterococci (VRE) were immune suppression (P=0.031) and artificial feeding (P=0.008). For the rectal swabs, enterococci species were detected in 71.4% of samples, with a predominance of E. casseliflavus (50%). Most of these isolates were vancomycin resistant (70%). Out of a total of 1377 environmental samples, 577 (42%) were contaminated with different microorganisms. Enterococci were detected in 1.7% (10/577) of the contaminated samples, 50% of which were vancomycin resistant. All isolates were resistant to penicillin, ampicillin, oxacillin, ciprofloxacin, amikacin, erythromycin, clindamycin and trimethoprim-sulfamethoxazole. For the remaining antibiotics, variable percentages of resistance were reported. Cytolysin and gelatinase were detected phenotypically in 16% and 48% of the isolates, respectively. The microtiter plate method showed the highest percentage of biofilm detection among all isolated species (100%). The studied virulence genes gelE, esp, vanA and vanB were detected in 62%, 12%, 2% and 12% of isolates, respectively, while the cylA gene was not detected in any isolate. Conclusions: A significant percentage of enterococci was isolated from patients and environments in the ICUs. Many virulence factors were detected phenotypically and genotypically among the isolates. The high percentage of resistance, coupled with the risk of cross-transmission to other patients, makes enterococci infections a significant infection control issue in hospitals.Keywords: antimicrobial resistance, enterococci, ICUs, virulence factors
Procedia PDF Downloads 283
94 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components
Authors: Francesca Gullo, Paola Palmero, Massimo Messori
Abstract:
Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood materials, two approaches can be used: i) bottom-up and ii) top-down. In the second method, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolic, and carbonyl groups. These chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chemicals containing chlorine (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductive agent (hydrosulfite) or oxidative agent (hydrogen peroxide) is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls and by monitoring both the chemical treatments and the process conditions, for instance, the treatment time, the concentration of the chemical solutions, the pH value, and the temperature. The elimination of wood light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method. In the polymer impregnation method, the wood scaffold is treated under vacuum with polymers having a matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to obtain a finished product with high transmittance (>90%) and excellent light-guiding properties. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of biobased polymers, while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the field of construction for modern energy-efficient buildings.Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites
Procedia PDF Downloads 52
93 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at the station level and to examine the association between subway ridership and bus demand, incorporating the commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all have an impact on subway ridership. However, not only the endogenous relationship between bus and subway demand but also the characteristics of commercial businesses within a subway station's sphere of influence and the topology of the integrated transit network should be considered. A statistical approach to estimating subway ridership at the station level should therefore account for the endogeneity and heteroscedasticity issues that may be present in a subway ridership prediction model. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and on developing a more precise subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours divided into one-hour time panels. The subway and bus ridership data were collected from Seoul Smart Card data for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model while capturing the endogeneity and heteroscedasticity between bus and subway demand. The independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. As a result, it was found that bus ridership and subway ridership were endogenous to each other and had significantly positive coefficients, which means that one transit mode could increase the other mode's ridership. In other words, the two transit modes of subway and bus have a mutually reinforcing rather than a competitive relationship. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper comprise six types: medical, educational, recreational, financial, food service, and shopping. The model results show that a higher rate of medical, financial, shopping, and food service facilities leads to an increase in subway ridership at a station, while recreational and educational facilities are associated with lower subway ridership. Complex network theory was applied to estimate integrated network topology measures covering the entire Seoul transit network system and to provide a framework for assessing their impact on subway ridership. The centrality measures were found to be significant and showed a positive sign, indicating that higher centrality led to more subway ridership at the station level. Out-of-sample model accuracy tests showed that the 3SLS model had a lower mean square error than OLS, indicating that the 3SLS approach was plausible for estimating subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2017R1C1B2010175).Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology
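To make the endogeneity handling concrete, the sketch below shows an equation-by-equation two-stage least squares step for the subway equation (the first stages of 3SLS; the cross-equation GLS weighting of full 3SLS is omitted). It is an illustration with assumed variable names, not the study's model or its Smart Card data.

```python
# Illustrative 2SLS sketch for one equation of the simultaneous system
# (subway ridership with endogenous bus ridership). Full 3SLS would add a
# cross-equation GLS step; all variable names below are assumptions.
import statsmodels.api as sm

def two_stage_ls(df, dep="subway", endog="bus",
                 exog=("medical_rate", "shopping_rate", "centrality"),
                 instruments=("bus_stop_density", "bus_headway")):
    exog = list(exog)
    # Stage 1: project the endogenous regressor on all exogenous variables
    # plus the excluded instruments.
    z = sm.add_constant(df[exog + list(instruments)])
    bus_hat = sm.OLS(df[endog], z).fit().predict(z)
    # Stage 2: replace observed bus ridership by its fitted values.
    x = sm.add_constant(df[exog].assign(bus_hat=bus_hat))
    # Note: second-stage standard errors from this manual procedure need
    # correction in practice; dedicated IV/3SLS estimators handle this.
    return sm.OLS(df[dep], x).fit()
```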
Procedia PDF Downloads 144
92 Genetically Engineered Crops: Solution for Biotic and Abiotic Stresses in Crop Production
Authors: Deepak Loura
Abstract:
Production and productivity of several crops in the country continue to be adversely affected by biotic (e.g., insect pests and diseases) and abiotic (e.g., water, temperature and salinity) stresses. Over-dependence on pesticides and other chemicals is economically non-viable for the resource-poor farmers of our country. Further, pesticides can potentially affect human and environmental safety. While traditional breeding techniques and proper management strategies continue to play a vital role in crop improvement, we need to judiciously use biotechnology approaches for the development of genetically modified crops addressing critical problems in the improvement of crop plants for sustainable agriculture. Modern biotechnology can help to increase crop production, reduce farming costs, and improve food quality and the safety of the environment. Genetic engineering is a new technology which allows plant breeders to produce plants with new gene combinations through the genetic transformation of crop plants for the improvement of agronomic traits. Advances in recombinant DNA technology have made it possible to transfer genes between widely divergent species to develop genetically modified or genetically engineered plants. Plant genetic engineering provides the means to harness useful genes and alleles from indigenous microorganisms to enrich the gene pool for developing genetically modified (GM) crops that have inbuilt (inherent) resistance to insect pests, diseases, and abiotic stresses. Plant biotechnology has made significant contributions in the past 20 years in the development of genetically engineered or genetically modified crops with multiple benefits. A variety of traits have been introduced in genetically engineered crops, including (i) herbicide resistance, (ii) pest resistance, (iii) viral resistance, (iv) slow ripening of fruits and vegetables, (v) fungal and bacterial resistance, (vi) abiotic stress tolerance (drought, salinity, temperature, flooding, etc.), (vii) quality improvement (starch, protein, and oil), (viii) value addition (vitamins, micro- and macro-elements), (ix) pharmaceutical and therapeutic proteins, and (x) edible vaccines. Multiple genes in transgenic crops can be useful in developing durable disease resistance and a broad insect-control spectrum and could lead to potential cost-saving advantages for farmers. The development of transgenics to produce high-value pharmaceuticals and edible vaccines is also in progress, although much more research and development work is required before commercially viable products become available. In addition, marker-assisted selection (MAS) is now routinely used to enhance the speed and precision of plant breeding. Newer technologies need to be developed and deployed for enhancing and sustaining agricultural productivity. There is a need to optimize the use of biotechnology in conjunction with conventional technologies to achieve higher productivity with fewer resources. Therefore, the genetic modification/engineering of crop plants assumes greater importance, which demands the development and adoption of newer technology for the genetic improvement of crops to increase crop productivity.Keywords: biotechnology, plant genetic engineering, genetically modified, biotic, abiotic, disease resistance
Procedia PDF Downloads 69
91 Atmospheric Circulation Patterns Inducing Coastal Upwelling in the Baltic Sea
Authors: Ewa Bednorz, Marek Polrolniczak, Bartosz Czernecki, Arkadiusz Marek Tomczyk
Abstract:
This study is meant as a contribution to research on the upwelling phenomenon, which is one of the most pronounced examples of sea-atmosphere coupling. The aim is to confirm the atmospheric forcing of sea water circulation and sea surface temperature along the variously oriented Baltic Sea coasts and to identify macroscale and regional circulation patterns triggering upwelling along different sections of this relatively small and semi-enclosed sea basin. Mean daily sea surface temperature data from the summer seasons (June–August) of the years 1982–2017 formed the basis for the detection of upwelling cases. For the atmospheric part of the analysis, monthly indices of the Northern Hemisphere macroscale circulation patterns were used. In addition, in order to identify the local direction of airflow, daily zonal and meridional regional circulation indices were constructed and introduced into the analysis. Finally, daily regional circulation patterns over the Baltic Sea region were distinguished by applying principal component analysis to gridded mean daily sea level pressure data. Within the Baltic Sea, upwelling is most frequent along the zonally oriented northern coast of the Gulf of Finland, the southern coasts of Sweden, and the middle part of the western Gulf of Bothnia coast. Among the macroscale circulation patterns, the Scandinavian type (SCAND), with a primary circulation center located over Scandinavia, has the strongest impact on the horizontal flow of surface sea waters in the Baltic Sea, which triggers upwelling. An anticyclonic center over Scandinavia in the positive phase of SCAND enhances the eastern airflow, which increases upwelling frequency along the southeastern Baltic coasts. It was shown in the study that the zonal circulation has a stronger impact on upwelling occurrence than the meridional one, and that it could increase or decrease the chance of upwelling formation by more than 70% in some coastal sections. The positive and negative phases of the six distinguished regional daily circulation patterns yielded 12 different synoptic situations, which were analyzed in terms of their influence on upwelling formation. Each of them revealed some impact on the frequency of upwelling in some coastal section of the Baltic Sea; however, two kinds of synoptic situations seemed to have the strongest influence, namely pressure patterns enhancing the zonal flow and synoptic patterns with cyclone/anticyclone centers over southern Scandinavia. Upwelling occurrence appeared to be particularly strongly reliant on atmospheric conditions in some specific coastal sections, namely the Gulf of Finland, the southeastern Baltic coasts (the Polish and Latvian-Lithuanian sections), and the western part of the Gulf of Bothnia. Concluding, it can be stated that atmospheric conditions strongly control the occurrence of upwelling within the Baltic Sea basin. Both local and macroscale circulation patterns, expressed by the location of the pressure centers, influence the frequency of this phenomenon; however, the strength of the impact varies depending on the coastal region. Acknowledgment: This research was funded by the National Science Centre, Poland, grant number 2016/21/B/ST10/01440.Keywords: Baltic Sea, circulation patterns, coastal upwelling, synoptic conditions
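The regional circulation patterns were obtained by applying principal component analysis to gridded daily sea level pressure; a minimal sketch of that step (with an assumed days-by-gridpoints array, not the actual reanalysis data used in the study) could look like this.

```python
# Illustrative sketch: leading circulation patterns from daily SLP fields.
# slp is an assumed 2-D array of shape (n_days, n_gridpoints) in hPa.
import numpy as np
from sklearn.decomposition import PCA

def circulation_patterns(slp: np.ndarray, n_patterns: int = 6):
    anomalies = slp - slp.mean(axis=0)      # remove the long-term mean at each gridpoint
    pca = PCA(n_components=n_patterns)
    scores = pca.fit_transform(anomalies)   # daily amplitudes (positive/negative phases)
    patterns = pca.components_              # spatial loading maps, one per pattern
    explained = pca.explained_variance_ratio_
    return patterns, scores, explained
```

The sign of each daily score then separates the positive and negative phases of a pattern, which is how the 12 synoptic situations described above can be formed from six patterns.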
Procedia PDF Downloads 126
90 Understanding Stock-Out of Pharmaceuticals in Timor-Leste: A Case Study in Identifying Factors Impacting on Pharmaceutical Quantification in Timor-Leste
Authors: Lourenco Camnahas, Eileen Willis, Greg Fisher, Jessie Gunson, Pascale Dettwiller, Charlene Thornton
Abstract:
Stock-out of pharmaceuticals is a common issue at all levels of health services in Timor-Leste, a small post-conflict country. This leads to the research questions: what are the current methods used to quantify pharmaceutical supplies, and what factors contribute to the ongoing pharmaceutical stock-outs? The study examined factors that influence the pharmaceutical supply chain system. Methodology: The Privett and Goncalvez dependency model was adopted for the design of the qualitative interviews. The model examines pharmaceutical supply chain management at three management levels: management of individual pharmaceutical items, health facilities, and health systems. The interviews were conducted in order to collect information on inventory management, the logistics management information system (LMIS) and the provision of pharmaceuticals. Andersen's behavioural model of healthcare utilization also informed the interview schedule, specifically factors linked to the environment (the healthcare system and the external environment) and the population (enabling factors). Forty health professionals (bureaucrats, clinicians) and six senior officers from a United Nations agency, a global multilateral agency and a local non-governmental organization were interviewed on their perceptions of the factors (healthcare system/supply chain and the wider environment) impacting stock-out. Additionally, policy documents for the entire healthcare system, along with population data, were collected. Findings: An analysis using Pozzebon's critical interpretation identified a range of difficulties within the system, from poor coordination to failure to adhere to policy guidelines, along with major difficulties with inventory management, quantification, forecasting, and budgetary constraints. A weak logistics management information system and a lack of capacity in inventory management, monitoring and supervision are additional organizational factors that also contributed to the issue. Various methods of quantification of pharmaceuticals were applied in the government sector and by non-governmental organizations. A lack of reliable data is one of the major problems in pharmaceutical provision. The Global Fund has the best quantification method, fed by consumption data and malaria cases. Other issues worsen stock-out: political intervention, work ethic and basic infrastructure such as unreliable internet connectivity. Major issues impacting pharmaceutical quantification have been identified. However, the current data collection identified limitations within the Andersen model, specifically a failure to take account of predictors in the healthcare system and the environment (culture/politics/social factors). The next steps are to (a) compare the models used by three non-governmental agencies with the government model; (b) run the Andersen explanatory model for pharmaceutical expenditure for 2 to 5 drug items used by these three development partners in order to see how it correlates with the present model in terms of quantification and forecasting of needs; (c) repeat objectives (a) and (b) using the government model; and (d) draw a conclusion about the strengths of the respective models.Keywords: inventory management, pharmaceutical forecasting and quantification, pharmaceutical stock-out, pharmaceutical supply chain management
Procedia PDF Downloads 242
89 Spatial Variation in Urbanization and Slum Development in India: Issues and Challenges in Urban Planning
Authors: Mala Mukherjee
Abstract:
Background: India is urbanizing very fast, and urbanisation in India is treated as one of the most crucial components of economic growth. Though the pace of urbanisation (31.6 per cent in 2011) is slower and lower than the average for Asia, the absolute number of people residing in cities and towns has increased substantially. Rapid urbanization leads to urban poverty, which is well represented in slums. Currently, India has four metropolises and 53 million-plus cities. All of them have significant slum populations, but the standard of living and the success of slum development programmes vary across regions. Objectives: The objectives of the paper are to show how urbanisation and slum development vary across space; to show the spatial variation in the standard of living in Indian slums; to analyse how the implementation of slum development policies like JNNURM and Rajiv Awas Yojana varies across cities and brings different results in different regions; and to identify the factors responsible for such variation. Data Sources and Methodology: Census 2011 data on urban population and on slum households and amenities have been used for analysing the regional variation of urbanisation in the 53 million-plus cities of India, with a special focus on the Kolkata Metropolitan Area. Statistical techniques like the z-score and PCA have been employed to work out a Standard of Living Deprivation score for all the slums of the 53 metropolises. ArcGIS software is used for making maps. Standard of living has been measured in terms of access to basic amenities, infrastructure and assets like drinking water, sanitation, housing condition, bank account, and so on. Findings: 1. The first finding reveals that migration and urbanization are very high in Greater Mumbai, Delhi, Bangaluru, Chennai, Hyderabad and Kolkata, but slum population is high in Greater Mumbai (50% of the population live in slums), Meerut, Faridabad, Ludhiana, Nagpur, Kolkata etc. Though the rate of urbanization is high in southern and western states, the percentage of slum population is high in northern states (except Greater Mumbai). 2. Standard of living also varies widely. Slums of Greater Mumbai and north Indian cities score fairly high on the index, indicating that the standard of living is high in those slums compared to the slums in eastern India (Dhanbad, Jamshedpur, Kolkata). Therefore, though Kolkata has a relatively lower percentage of slum population compared to north and south Indian cities, the standard of living in Kolkata's slums is deplorable. 3. It is interesting to note that even within the Kolkata Metropolitan Area, slums located in the southern and eastern municipal towns like Rajpur-Sonarpur, Pujali, Diamond Harbour, Baduria and Dankuni have a lower standard of living compared to the slums located in the Hooghly industrial belt like Titagarh, Rishrah, Srerampore etc. Slums of the Hooghly industrial belt are older than the slums located in the eastern and southern parts of the urban agglomeration. 4. Therefore, urban development and the emergence of slums should not be the only issues of urban governance; the standard of living should be the main focus. Slums located in the main cities like Delhi, Mumbai and Kolkata get more attention from urban planners, and similarly, older slums in a city receive greater political attention compared to the slums of smaller cities and newly emerged slums of the peripheral parts.Keywords: urbanisation, slum, spatial variation, India
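The Standard of Living Deprivation score is built from z-scored amenity indicators combined through PCA; a hedged sketch of such a composite index is given below, using hypothetical indicator columns rather than the actual Census 2011 fields.

```python
# Illustrative sketch of a PCA-weighted deprivation score from slum-level
# amenity indicators. Column names are hypothetical, not the Census 2011 fields.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def deprivation_score(df: pd.DataFrame) -> pd.Series:
    indicators = ["pct_no_drinking_water", "pct_no_sanitation",
                  "pct_dilapidated_housing", "pct_no_bank_account"]
    z = StandardScaler().fit_transform(df[indicators])   # z-score each indicator
    pca = PCA(n_components=1)
    score = pca.fit_transform(z)[:, 0]                   # first principal component
    return pd.Series(score, index=df.index, name="sol_deprivation_score")
```

The first component simply provides data-driven weights for the indicators; the sign convention should be checked so that higher scores correspond to greater deprivation.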
Procedia PDF Downloads 359
88 The Effects of Circadian Rhythms Change in High Latitudes
Authors: Ekaterina Zvorykina
Abstract:
Nowadays, the Arctic and Antarctic regions are recognized as among the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is the health accommodation of the people who migrate to the Arctic region for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the time of the year. This means that they are deprived of natural sunlight in the winter season and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, controlled artificial light in winter is also an issue. The results of recent studies on night-shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression, and Alzheimer's disease. Moreover, people exposed to frequent time zone changes are also subject to higher risks of heart attack and cancer. Thus, our main goals are to understand how high-latitude work and living conditions can affect human health and how adverse effects can be prevented. In our study, we analyze molecular and cellular factors which play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The main well-studied index of circadian timing is melatonin or its metabolite 6-sulfatoxymelatonin. Under low light intensity, melatonin synthesis is disturbed, and as a result, the human organism requires more time for sleep, which is still disregarded when it comes to the organization of working time. A lack of melatonin also causes a shortage in serotonin production, which leads to higher depression risk. Melatonin is also known to inhibit oncogenes and increase the level of apoptosis in cells, both key factors in tumor growth, as well as to regulate circadian clock genes (for example, Per2). Thus, people who work at high latitudes can be distinguished as a risk group for cancer diseases and demand more attention. Clock/Clock genes, known to be among the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to the accumulation of superoxide dismutase and oxidative stress, which is one of the main factors for early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with accommodation therapy. The screening is focused on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can lead to an increase in migration to high latitudes and, as a result, to the prosperity of the Arctic region.Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors
Procedia PDF Downloads 155
87 A Rapid and Greener Analysis Approach Based on Carbonfiber Column System and MS Detection for Urine Metabolomic Study After Oral Administration of Food Supplements
Authors: Zakia Fatima, Liu Lu, Donghao Li
Abstract:
The analysis of biological fluid metabolites holds significant importance in various areas, such as medical research, food science, and public health. Investigating the levels and distribution of nutrients and their metabolites in biological samples allows researchers and healthcare professionals to determine nutritional status, detect hypovitaminosis or hypervitaminosis, and monitor the effectiveness of interventions such as dietary supplementation. Moreover, the analysis of nutrient metabolites provides insight into their metabolism, bioavailability, and physiological processes, aiding in the clarification of their roles in health. Hence, the exploration of a distinct, efficient, eco-friendly, and simpler methodology is of great importance for evaluating the metabolic content of complex biological samples. In this work, a green and rapid analytical method based on an automated online two-dimensional microscale carbon fiber/activated carbon fiber fractionation system coupled with time-of-flight mass spectrometry (2DμCFs-TOF-MS) was used to evaluate the metabolites of urine samples after oral administration of food supplements. The automated 2DμCFs instrument consisted of a microcolumn system with bare carbon fibers and modified carbon fiber coatings. Carbon fibers and modified carbon fibers exhibit different surface characteristics and accordingly retain different compounds. Three kinds of mobile-phase solvents were used to elute compounds of varied chemical heterogeneity. The 2DμCFs separation system can effectively separate different compounds based on their polarity and solubility characteristics. No complicated sample preparation method was used prior to analysis, which makes the strategy more eco-friendly, practical, and faster than traditional analysis methods. For optimum analysis results, the mobile phase composition, flow rate, and sample diluent were optimized. The screening covered water-soluble vitamins, fat-soluble vitamins, and amino acids, and 22 vitamin metabolites as well as 11 vitamin metabolic pathway-related metabolites were found in the urine samples. All water-soluble vitamins except vitamin B12 and vitamin B9 were detected in the urine samples. However, no fat-soluble vitamin was detected, and only one metabolite of vitamin A was found. The comparison with a blank urine sample showed a considerable difference in metabolite content. For example, the vitamin metabolites and three related metabolites were not detected in blank urine. The complete single-run screening was carried out in 5.5 minutes with minimal consumption of toxic organic solvent (0.5 ml). The analytical method was evaluated in terms of greenness, with an analytical greenness (AGREE) score of 0.72. The method's practicality was investigated using the Blue Applicability Grade Index (BAGI) tool, obtaining a score of 77. The findings of this work illustrate that the 2DµCFs-TOF-MS approach could emerge as a fast, sustainable, practical, high-throughput, and promising analytical tool for the screening and accurate detection of various metabolites, pharmaceuticals, and ingredients in dietary supplements as well as biological fluids.Keywords: metabolite analysis, sustainability, carbon fibers, urine.
Procedia PDF Downloads 24
86 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance
Authors: Omer Leshem, Michael F. Schober
Abstract:
This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members' recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members' cognitive perspective-taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members' affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing details of the method or hypotheses, to perform a full-length solo improvised concert that would include an 'unusual' piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School's Glass Box Theater, the home of the leading NYC jazz venue 'The Stone.' Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: 'Perform a 3-5-minute improvised piece with the intention of conveying sadness.' (Sadness was chosen based on previous music cognition lab studies, in which solo listeners were less likely to accurately select sadness as the musically expressed emotion from a list of basic emotions, and more likely to misinterpret sadness as tenderness.) Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seats. Participants used their own words to describe the emotion the performer had intended to express and then selected the intended emotion from a list. They also reported the emotions they had felt while listening, using Izard's differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis' interpersonal reactivity index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected 'sadness' reported feeling marginally sadder than people who did not select sadness. Hypothesis 2 was not supported; audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer's actual feelings. The results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion
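The test of hypothesis 1 amounts to predicting a binary accuracy outcome from the IRI perspective-taking subscale; a minimal sketch of that kind of analysis (with assumed column names, not the study's data file) is shown below.

```python
# Illustrative sketch: does IRI perspective-taking predict accurate
# identification of the expressed emotion? Column names are assumptions.
import statsmodels.formula.api as smf

def accuracy_model(df):
    # identified_sadness: 1 if the listener selected sadness, 0 otherwise
    # iri_perspective_taking: cognitive empathy (perspective-taking) subscale score
    return smf.logit("identified_sadness ~ iri_perspective_taking", data=df).fit()
```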
Procedia PDF Downloads 131
85 Influence of the Local External Pressure on Measured Parameters of Cutaneous Microcirculation
Authors: Irina Mizeva, Elena Potapova, Viktor Dremin, Mikhail Mezentsev, Valeri Shupletsov
Abstract:
The local tissue perfusion is regulated by the microvascular tone, which is under the control of a number of physiological mechanisms. Laser Doppler flowmetry (LDF) together with wavelet analysis is the most commonly used technique to study the regulatory mechanisms of cutaneous microcirculation. External factors such as temperature, the local pressure of the probe on the skin, etc., influence the blood flow characteristics and are used as physiological tests to evaluate microvascular regulatory mechanisms. Local probe pressure influences the microcirculation parameters measured by optical methods: diffuse reflectance spectroscopy, fluorescence spectroscopy, and LDF. Therefore, further study of probe pressure effects can be useful to improve the reliability of optical measurements. During pressure tests, the variation of the mean perfusion measured by means of LDF is usually estimated. Additional information concerning the physiological mechanisms of the vascular tone regulation system in response to local pressure can be obtained using spectral analysis of LDF samples. The aim of the present work was to develop a protocol and a data processing algorithm appropriate for studying the physiological response to a local pressure test. Involving 6 subjects (20±2 years) and performing 5 measurements for every subject, we estimated the inter-subject and inter-group variability of the response of both the averaged and oscillating parts of the LDF sample to external surface pressure. The final purpose of the work was to find specific features which can further be used in wider clinical studies. The cutaneous perfusion measurements were carried out with a LAKK-02 device (SPE LAZMA Ltd., Russia); the skin loading was provided by an originally designed device which allows one to distribute the pressure around the LDF probe. The probe was installed on the dorsal part of the distal phalanx of the index finger. We collected measurements continuously for one hour and varied the loading from 0 to 180 mmHg stepwise, with a step duration of 10 minutes. Further, we post-processed the samples using the wavelet transform and traced the energy of oscillations in five frequency bands over time. Weak loading leads to pressure-induced vasodilation, so one should take into account that the perfusion measured under pressure conditions will be overestimated. On the other hand, we revealed a decrease in endothelial-associated fluctuations. Further loading (88 mmHg) induces an amplification of pulsations in all frequency bands. We assume that such loading leads to a higher number of closed capillaries, a higher contribution of arterioles to the LDF signal and, as a consequence, more pronounced oscillations, which are mainly formed in arterioles. External pressure higher than 144 mmHg leads to a decrease of the oscillating components; after removing the loading, a very rapid restoration of the tissue perfusion takes place. In this work, we have demonstrated that local skin loading influences the microcirculation parameters measured by optical techniques; this should be taken into account while developing portable electronic devices. The proposed protocol of local loading allows one to evaluate pressure-induced vasodilation (PIV) as well as to trace the dynamics of blood flow oscillations. This study was supported by the Russian Science Foundation under project N 18-15-00201.Keywords: blood microcirculation, laser Doppler flowmetry, pressure-induced vasodilation, wavelet analyses blood
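A sketch of the post-processing step, tracing oscillation energy in the conventional LDF frequency bands with a continuous wavelet transform, is given below; the band limits are commonly cited approximate ranges and the PyWavelets-based code is illustrative, not the authors' implementation.

```python
# Illustrative sketch: band-wise wavelet energy of an LDF perfusion signal.
# Band limits are approximate literature values, not the authors' exact choices.
import numpy as np
import pywt

BANDS = {"endothelial": (0.0095, 0.02), "neurogenic": (0.02, 0.06),
         "myogenic": (0.06, 0.15), "respiratory": (0.15, 0.4),
         "cardiac": (0.4, 1.6)}                     # Hz

def band_energies(perfusion: np.ndarray, fs: float = 20.0) -> dict:
    dt = 1.0 / fs
    freqs = np.geomspace(0.005, 2.0, 200)           # analysis frequencies, Hz
    fc = pywt.central_frequency("morl")             # center frequency of the Morlet wavelet
    scales = fc / (freqs * dt)                      # convert target frequencies to scales
    coefs, cwt_freqs = pywt.cwt(perfusion, scales, "morl", sampling_period=dt)
    power = np.abs(coefs) ** 2
    return {name: power[(cwt_freqs >= lo) & (cwt_freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```

Computing these band energies in sliding windows over the one-hour recording gives the time course of oscillation energy at each loading step described above.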
Procedia PDF Downloads 150