Search results for: health & development
511 Effect of Exercise and Mindfulness on Cognitive and Psycho-Emotional Functioning in Children with ADHD
Authors: Hannah Bigelow, Marcus D. Gottlieb, Michelle Ogrodnik, Jeffrey D. Graham, Barbara Fenesi
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is one of the most common neurodevelopmental disorders, affecting approximately 6% of children worldwide. ADHD is characterized by a combination of persistent deficits, including impaired inhibitory control, working memory, and task-switching. Many children with ADHD also have comorbid mental health issues such as anxiety and depression. There are several treatment options to manage ADHD impairments, including drug and behavioural management therapy, but all have drawbacks, such as worsening mood disturbances or being inaccessible to certain demographics. Both physical exercise and mindfulness meditation serve as alternative options that may help mitigate ADHD symptoms. Although there is extensive support for the benefits of long-term physical exercise or mindfulness meditation programs, there is insufficient research investigating how acute bouts (i.e., single, short bouts) can help children with ADHD. Thus, the current study aimed to understand how single, short bouts of exercise and mindfulness meditation impact executive functioning and psycho-emotional well-being in children with ADHD, and to directly compare the efficacy of the two interventions. The study used a pre-/post-test, within-subjects design to assess the effects of a 10-minute bout of moderate-intensity exercise versus a 10-minute bout of mindfulness meditation (versus 10 minutes of a reading control) on the executive functioning and psycho-emotional well-being of 16 children and youth with ADHD aged 10-14 (male=11; White=80%). Participants completed all three interventions: 10 minutes of exercise, 10 minutes of mindfulness meditation, and 10 minutes of reading (control). Executive functioning (inhibitory control, working memory, task-switching) and psycho-emotional well-being (mood, self-efficacy) were assessed before and after each intervention.
Mindfulness meditation promoted executive functioning, while exercise enhanced positive mood and self-efficacy. Critically, this work demonstrates that a single, short bout of mindfulness meditation can promote inhibitory control among children with ADHD. This is especially important because inhibitory control deficits are among the most pervasive challenges these children face. Furthermore, the current study provides preliminary evidence of the benefit of acute exercise for promoting positive mood and general self-efficacy in children and youth with ADHD. These results may increase the accessibility of acute exercise for children with ADHD, giving guardians and teachers a feasible option: incorporating just 10 minutes of exercise to assist children emotionally. In summary, this research supports the use of acute exercise and mindfulness meditation for varying aspects of executive functioning and psycho-emotional well-being in children and youth with ADHD. This work offers important insight into how behavioural interventions could be personalized according to a child’s needs.
Keywords: attention-deficit hyperactivity disorder (ADHD), acute exercise, mindfulness meditation, executive functioning, psycho-emotional well-being
Procedia PDF Downloads 132
510 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach
Authors: Rupesh S. Gundewar, Kanchan C. Khare
Abstract:
Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads, and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff alleviation is provided by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages include catchment slope, infiltration, catchment length, channel rerouting, drainage density, and depression storage. A literature survey on manmade and natural storages/inflows has quantified the percentage contribution of each. Sanders et al. reported that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. reported that catchment slope affects rainfall runoff by 16% on bare standard soil and 24% on grassed soil. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature gives a range of 15% to 30% loss of rainfall runoff across various catchment study areas. Catchment length and channel rerouting also play a considerable role in reducing rainfall runoff. Ground infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm; this inflow together with surface inflow contributes about 2% of the total runoff volume. Considering the various contributing factors, the literature survey indicates that an integrated modelling approach needs to be considered. Traditional storm water network models can predict to a fair/acceptable degree of accuracy provided no interactions with receiving waters (river, sea, canal, etc.), ground infiltration, or treatment works are assumed.
When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, and as a result the real flooding situation is rarely represented accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more detailed spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables identification of the source of flow in the system, an understanding of how it is conveyed, and an assessment of its impact on the receiving body. It also reveals important pain points, hydraulic controls, and sources of flooding that could not be easily understood with a discrete modelling approach. This enables decision makers to identify solutions that can be spread throughout the catchment rather than concentrated at the single point where the problem exists. Thus it can be concluded from the literature survey that the representation of urban details can be a key differentiator in successfully understanding the flooding issue. The intent of this study is to accurately predict the runoff from impermeable areas in an urban area in India. A representative area for which data were available has been selected, and predictions have been made and corroborated against the actual measured data.
Keywords: runoff, urbanization, impermeable response, flooding
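The percentage figures surveyed in the abstract lend themselves to a back-of-the-envelope combination. The sketch below is an illustrative assumption, not the paper's integrated model: it chains each storage/loss mechanism as an independent multiplicative reduction, using mid-range literature values.

```python
def effective_runoff(rainfall_mm, reductions_pct):
    """Apply each storage/loss mechanism as an independent fractional reduction."""
    runoff = rainfall_mm
    for pct in reductions_pct:
        runoff *= (1.0 - pct / 100.0)
    return runoff

# Hypothetical mid-range values: canopy 10%, slope 20%, infiltration 25%
q = effective_runoff(50.0, [10, 20, 25])  # effective runoff depth in mm
```

Treating the mechanisms as independent is itself a simplification; an integrated model of the kind the abstract argues for would couple them through the catchment's hydraulics.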
Procedia PDF Downloads 250
509 An Audit of Climate Change and Sustainability Teaching in Medical School
Authors: Karolina Wieczorek, Zofia Przypaśniak
Abstract:
Climate change is a rapidly growing threat to global health, and part of the responsibility to combat it lies within the healthcare sector itself, including adequate education of future medical professionals. To mitigate the consequences, the General Medical Council (GMC) has equipped medical schools with a list of outcomes regarding sustainability teaching. Students are expected to analyze the impact of the healthcare sector’s emissions on climate change. The delivery of the related teaching content is, however, often inadequate, and insufficient time is devoted to exploring the topics; teaching curricula lack in-depth exploration of the learning objectives. This study aims to assess the extent and characteristics of climate change and sustainability teaching in the curriculum of a chosen UK medical school (Barts and The London School of Medicine and Dentistry). It compares the data to the national average scores from the Climate Change and Sustainability Teaching (C.A.S.T.) in Medical Education Audit to draw conclusions at a regional level. This is a single-center audit of the timetabled teaching sessions of the medical course. The study covered the academic year 2020/2021 and included a review of all non-elective, core-curriculum teaching materials, including tutorials, lectures, written resources, and assignments in all five years of the undergraduate and graduate degrees, focusing only on mandatory teaching attended by all students (excluding elective modules). The topics covered were cross-checked against the GMC Outcomes for graduates, “Educating for Sustainable Healthcare – Priority Learning Outcomes”, as the gold standard, to identify coverage of the outcomes and gaps in teaching. Quantitative data were collected in the form of time allocated for teaching, as a proxy for time spent per individual outcome.
The data were collected independently by two students (KW and ZP), who had received prior training and assessed two separate data sets to increase inter-rater reliability. In terms of coverage of learning outcomes, 12 out of 13 were taught (the national average being 9.7). The school ranked sixth in the UK for time spent per topic and second for overall coverage, meaning the school teaches a broad range of topics, with some explored in more detail than others. For the first outcome, 4 out of 4 objectives were covered (national average 3.5), with 47 minutes spent per outcome (average 84 min); for the second, 5 out of 5 were covered (average 3.5), with 46 minutes spent (average 20 min); for the third, 3 out of 4 (average 2.5), with 10 minutes spent (average 19 min). A disproportionately large amount of time is spent delivering teaching on air pollution (respiratory illnesses), which resulted in sustainability in other specialties (musculoskeletal, ophthalmology, pediatrics, renal) being excluded from teaching. Conclusions: Currently, there is no coherent national strategy for teaching climate change topics, and as a result an unstandardized amount of teaching time and uneven coverage of objectives can be observed.
Keywords: audit, climate change, sustainability, education
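The coverage and time-per-outcome figures above can be derived from timetable data with a simple aggregation. The sketch below is illustrative only; the outcome IDs and session minutes are hypothetical, not the audit's actual data.

```python
def audit_summary(sessions, total_outcomes):
    """sessions: list of (outcome_id, minutes) tuples from timetabled teaching.

    Returns the number of outcomes covered, the coverage percentage,
    and the total minutes accumulated per outcome.
    """
    minutes_per_outcome = {}
    for outcome, minutes in sessions:
        minutes_per_outcome[outcome] = minutes_per_outcome.get(outcome, 0) + minutes
    covered = len(minutes_per_outcome)
    coverage_pct = 100.0 * covered / total_outcomes
    return covered, coverage_pct, minutes_per_outcome

# Hypothetical timetable entries mapped to GMC outcome IDs
sessions = [(1, 25), (1, 22), (2, 46), (3, 10)]
covered, pct, per_outcome = audit_summary(sessions, total_outcomes=13)
```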
Procedia PDF Downloads 86
508 Importance of Macromineral Ratios and Products in Association with Vitamin D in Pediatric Obesity Including Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The metabolisms of the macrominerals calcium, phosphorus and magnesium are closely associated with the metabolism of vitamin D. Magnesium in particular, the second most abundant intracellular cation, is involved in biochemical and metabolic processes in the body, such as those of carbohydrates, proteins and lipids. The status of each mineral has been investigated in obesity to some extent; their products and ratios may give much more detailed information about the matter. The aim of this study is to investigate possible relations between each macromineral and some obesity-related parameters. The study was performed on 235 children aged 6-18 years. Aside from anthropometric measurements, hematological analyses were performed. A TANITA body composition monitor using bioelectrical impedance analysis technology was used to establish some obesity-related parameters, including basal metabolic rate (BMR) and total fat, mineral and muscle masses. World Health Organization body mass index (BMI) percentiles for age and sex were used to constitute the groups. Values above the 99th percentile were defined as morbid obesity. Those between the 95th and 99th percentiles were included in the obese group. The overweight group comprised children whose percentiles were between 85 and 95. Children between the 15th and 85th percentiles were defined as normal. Metabolic syndrome (MetS) components (waist circumference, fasting blood glucose, triacylglycerol, high density lipoprotein cholesterol, systolic pressure, diastolic pressure) were determined. High performance liquid chromatography was used to determine vitamin D status by measuring 25-hydroxy cholecalciferol (25-hydroxy vitamin D3, 25(OH)D). Vitamin D values above 30.0 ng/ml were accepted as sufficient. The SPSS statistical package program was used for the evaluation of data. The statistical significance level was accepted as p < 0.05.
Important findings were the correlations between vitamin D and magnesium as well as phosphorus (p < 0.05) in the group with normal BMI values; these correlations were lost in the other groups. The ratio of phosphorus to magnesium was even more strongly correlated with vitamin D (p < 0.001). The negative correlation between magnesium and total fat mass (p < 0.01) was confined to the MetS group, showing an inverse relationship between magnesium levels and the degree of obesity. In this group, the calcium*magnesium product exhibited the highest correlation with total fat mass (p < 0.001) among all groups. Only in the MetS group was a negative correlation found between BMR and the calcium*magnesium product (p < 0.05). In conclusion, magnesium is central to the relationships among vitamin D, fat mass and MetS. Ratios and products derived from macrominerals, including magnesium, have pointed out stronger associations than each element alone. Final considerations show that the unique correlations of magnesium, as well as the calcium*magnesium product, with total fat mass draw attention particularly in the MetS group, possibly due to derangements in some basic elements of carbohydrate as well as lipid metabolism.
Keywords: macrominerals, metabolic syndrome, pediatric obesity, vitamin D
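The derived-variable approach described above (ratios and products of macrominerals correlated with vitamin D) can be sketched as follows. The study used SPSS; this stdlib version of Spearman's coefficient omits tie correction, and the serum values are invented for illustration only.

```python
def ranks(xs):
    """1-based ranks of a sequence (assumes distinct values)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman rho via the rank-difference formula, no tie correction."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical serum phosphorus and magnesium (mg/dL) and vitamin D (ng/ml)
p = [4.8, 5.1, 4.2, 5.6, 4.5]
mg = [2.1, 1.9, 2.3, 1.8, 2.2]
vitd = [31.0, 34.5, 27.2, 38.1, 29.3]

p_mg_ratio = [a / b for a, b in zip(p, mg)]
rho = spearman(p_mg_ratio, vitd)  # rank correlation of the P/Mg ratio with vitamin D
```

The same pattern applies to the calcium*magnesium product against total fat mass.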
Procedia PDF Downloads 114
507 Research on Reminiscence Therapy Game Design
Authors: Wen Huei Chou, Li Yi Chun, Wenwei Yu, Han Teng Weng, H. Yuan, T. Yang
Abstract:
The prevalence of dementia is estimated to rise to 78 million by 2030 and 139 million by 2050. Alzheimer's disease is the most common form of dementia, contributing 60–70% of cases. Addressing this growing challenge is crucial, especially considering the impact on older individuals and their caregivers. To reduce the behavioral and psychological symptoms of dementia, people with dementia use a variety of pharmaceutical and non-pharmacological treatments, and some studies have found that non-pharmacological interventions have potential benefits for depression, cognitive function, and social activities. Butler developed reminiscence therapy as a method of treating dementia. Through ‘life review,’ individuals recall past events, activities, and experiences, which can reduce depression in the elderly, improve their quality of life, help give meaning to their lives, and support independent living. The life-review process uses a variety of memory triggers, such as household items, past objects, photos, and music, and can be conducted collectively or individually, in structured or unstructured formats. However, despite the advantages of reminiscence therapy, past research has repeatedly pointed out that current studies lack rigorous experimental evaluation and cannot describe clear, generalizable results. Therefore, this study uses physiological-sensing experiments to find a feasible experimental and verification method, to provide clearer design specifications for reminiscence therapy and a more widespread application for healthy aging. This is an ongoing research project, a collaboration between the School of Design at Yunlin University of Science and Technology in Taiwan and the Department of Medical Engineering at Chiba University in Japan.
We use traditional rice dishes from Taiwan and Japan as nostalgic content to construct narrative structures for the elderly in the two countries for life-review activities, providing an easy-to-carry reminiscence-therapy game with an intuitive interactive design. The experiment is expected to be completed in 36 months. The design team constructed and designed the game after conducting surveys of literary and historical materials and interviews with elders to confirm the nostalgic historical content in Taiwan and Japan. The Japanese team planned the Electrodermal Activity (EDA) and Blood Volume Pulse (BVP) experimental environments and the data-calculation model; after conducting experiments with elderly people in the two locations, the research results will be analyzed and discussed jointly. The project has completed the first 24 months of pre-study and design work and has entered the project acceptance stage.
Keywords: reminiscence therapy, aging health, design research, life review
Procedia PDF Downloads 33
506 Evaluation of Requests and Outcomes of Magnetic Resonance Imaging Assessing for Cauda Equina Syndrome at a UK Trauma Centre
Authors: Chris Cadman, Marcel Strauss
Abstract:
Background: In 2020, University Hospital Wishaw in the United Kingdom became the centre for trauma and orthopaedics within its health board. This resulted in the majority of patients with suspected cauda equina syndrome (CES) being assessed and imaged at this site, increasing the demand on MR imaging and displacing other previous activity. Following this transition, imaging requests for CES did not always follow national guidelines and were often missing important clinical and safety information. There also appeared to be a very low positive scan rate compared with previously reported studies. In an attempt to improve patient selection and reduce the burden of CES imaging at this site, a clinical audit was performed. Methods: A total of 250 consecutive patients imaged to assess for CES were evaluated. Patients had to have presented acutely to either the emergency or orthopaedic department with a presenting complaint of suspected CES. Patients were excluded if they were not admitted acutely or were assessed by other clinical specialities. In total, 233 patients were included. Requests were assessed for an appropriate clinical history, an accurate and complete clinical assessment, and MRI safety information. Clinical assessment was allocated a score of 1-6 based on information relating to history of pain, level of pain, dermatomes/myotomes affected, peri-anal paraesthesia/anaesthesia, anal tone, and post-void bladder volume, with each element scoring one point. Images were assessed for positive findings of CES, acquired spinal stenosis, or nerve root compression. Results: Overall, 73% of requests had a clear clinical history of CES. The urgency of the request for imaging was given in 23% of cases. The mean clinical assessment score was 3.7 out of 6. Overall, 2% of scans were positive for CES, 29% showed acquired spinal stenosis, and 30% showed nerve root compression.
For patients with CES, 75% had acute neurological signs compared with 68% of the study population. CES patients had a mean clinical assessment score of 5.3, compared with 3.7 for the study population. Overall, 95% of requests had appropriate MRI safety information. Discussion: This study included 233 patients who underwent specialist assessment and referral for MR imaging for suspected CES. Despite the serious nature of this condition, a large proportion of imaging requests did not have a clear clinical query of CES, and the level of urgency was not given, which could potentially delay imaging and treatment. Clinical examination was also often incomplete, which can make triaging of patients presenting with similar symptoms challenging. The positive rate for CES was only 2%, well below other studies, which reported positive rates of 6–40%, with a large meta-analysis finding a mean positive rate of 19%. These findings demonstrate an opportunity to improve the quality of imaging requests for suspected CES. This may help to improve patient selection for imaging and bring the positive rate for CES imaging more in line with other centres.
Keywords: cauda equina syndrome, acute back pain, MRI, spine
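The six-element clinical assessment score used in the audit can be sketched as a simple checklist tally. The field names below are assumptions; the abstract only lists the six examination elements, each worth one point.

```python
# Hypothetical field names for the six CES examination elements
CES_ELEMENTS = (
    "pain_history",
    "pain_level",
    "dermatomes_myotomes",
    "perianal_sensation",
    "anal_tone",
    "post_void_bladder_volume",
)

def assessment_score(request):
    """One point per element documented on the imaging request (max 6)."""
    return sum(1 for e in CES_ELEMENTS if request.get(e))

# A request documenting four of the six elements
request = {"pain_history": True, "pain_level": True,
           "dermatomes_myotomes": False, "perianal_sensation": True,
           "anal_tone": True, "post_void_bladder_volume": False}
score = assessment_score(request)
```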
Procedia PDF Downloads 11
505 Social and Economic Aspects of Unlikely but Still Possible Welfare to Work Transitions from Long-Term Unemployed
Authors: Andreas Hirseland, Lukas Kerschbaumer
Abstract:
In Germany, there have consistently been about one million long-term unemployed people in recent years who did not benefit from the prospering labor market, while most short-term unemployed did. Instead, they are continuously dependent on welfare and sometimes precarious short-term employment, experiencing in-work poverty. Long-term unemployment thus becomes a main obstacle to regular employment, especially if accompanied by other impediments such as a low level of education (school/vocational), poor health (especially chronic illness), advanced age (older than fifty), immigrant status, motherhood, or engagement in care for other relatives. Almost two-thirds of all welfare recipients have multiple impediments which hinder a successful transition from welfare back to sustainable and sufficient employment. Hiring them is often considered too risky an investment by employers. Formal application schemes based on qualification certificates and vocational biographies might therefore reduce employers’ risks, but at the same time they are not helpful for the long-term unemployed and welfare recipients. The panel survey ‘Labor market and social security’ (PASS; ~15,000 respondents in ~10,000 households), carried out by the Institute for Employment Research (the research institute of the German Federal Labor Agency), shows that their chance of getting back to work tends to fall to nil: only 66 cases of such unlikely transitions could be observed. In a sequential explanatory mixed-methods study, these very scarce ‘success stories’ of unlikely transitions from long-term unemployment to work were explored by qualitative inquiry: in-depth interviews with a focus on biography, accompanied by qualitative network techniques, to gain more detailed insight into the relevant actors involved in the processes that promote the transition from welfare receipt to work.
There is strong evidence that sustainable transitions are influenced by biographical resources, such as habits of network use, a set of informal skills, and particularly a resilient way of dealing with obstacles, combined with contextual factors, rather than by the job-placement procedures promoted by job centers according to activation rules or by following formal application paths. On the employers’ side, small and medium-sized enterprises are often found to give job opportunities to a wider variety of applicants, often on the basis of a slowly but steadily growing relationship leading to employment. These results make it possible to show and discuss some limitations of (German) activation policies targeting welfare dependency and long-term unemployment. Based on these findings, more supportive small-scale measures in the field of labor-market policy are suggested to help long-term unemployed people with multiple impediments overcome their situation.
Keywords: against-all-odds, economic sociology, long-term unemployment, mixed-methods
Procedia PDF Downloads 238
504 Brain-Derived Neurotrophic Factor and Its Precursor ProBDNF Serum Levels in Adolescents with Mood Disorders: 2-Year Follow-Up Study
Authors: M. Skibinska, A. Rajewska-Rager, M. Dmitrzak-Weglarz, N. Lepczynska, P. Sibilski, P. Kapelski, J. Pawlak, J. Twarowska-Hauser
Abstract:
Introduction: Neurotrophic factors have been implicated in neuropsychiatric disorders. Brain-Derived Neurotrophic Factor (BDNF) influences neuron differentiation during development, as well as synaptic plasticity and neuron survival in adulthood. BDNF is widely studied in mood disorders and has been proposed as a biomarker for depression. BDNF is synthesized as a precursor protein, proBDNF. Both forms are biologically active and exert opposite effects on neurons. Aim: The aim of the study was to examine the serum levels of BDNF and proBDNF in young patients (below 24 years old) with unipolar and bipolar disorder during hypo/manic and depressive episodes and in remission, compared to a healthy control group. Methods: In a prospective 2-year follow-up study, we investigated alterations in the levels of BDNF and proBDNF in 79 patients (23 males, mean age 19.08, SD 3.3, and 56 females, mean age 18.39, SD 3.28) diagnosed with mood disorders (unipolar and bipolar disorder) compared with 35 healthy control subjects (7 males, mean age 20.43, SD 4.23, and 28 females, mean age 21.25, SD 2.11). Clinical characteristics, including mood, comorbidity, family history, and treatment, were evaluated during control visits, and clinical symptoms were rated using the Hamilton Depression Rating Scale and the Young Mania Rating Scale. Serum BDNF and proBDNF concentrations were determined by the Enzyme-Linked Immunosorbent Assay (ELISA) method. Serum BDNF and proBDNF levels were analysed with the covariates sex, age, age above vs. below 18 years, family history of affective disorders, and drug-free vs. medicated status. Normality of the data was tested using the Shapiro-Wilk test, and Levene’s test was used to assess homogeneity of variance. Non-parametric tests (Mann-Whitney U test, Kruskal-Wallis ANOVA, Friedman’s ANOVA, Wilcoxon signed-rank test) and the Spearman correlation coefficient were applied in the analyses. The statistical significance level was set at p < 0.05.
Results: BDNF and proBDNF serum levels did not differ between patients at baseline and controls, nor between patients in an acute episode of depression/hypo/mania at baseline and in euthymia (at month 3 or 6). Comparing BDNF and proBDNF levels between patients in euthymia and the control group, no differences were found. An increased BDNF level in women compared to men at baseline (p=0.01) was observed. The BDNF level at baseline was negatively correlated with depression and mania occurrence at month 24 (p=0.04), and the BDNF level at month 12 was negatively correlated with depression and mania occurrence at month 12 (p=0.01). A correlation of BDNF level with sex was detected (p=0.01). proBDNF levels at months 3, 6 and 12 correlated negatively with disease status (p=0.02, p=0.008, and p=0.009, respectively). No other correlations of BDNF and proBDNF levels with clinical and demographic variables were detected. Discussion: Our results did not show any differences in BDNF and proBDNF levels between depression, mania, euthymia, and controls. An imbalance in BDNF/proBDNF signalling may be involved in the pathogenesis of mood disorders. Further studies on larger groups are recommended. The grant was funded by the National Science Center in Poland, no. 2011/03/D/NZ5/06146.
Keywords: bipolar disorder, Brain-Derived Neurotrophic Factor (BDNF), proBDNF, unipolar depression
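As an illustration of the group comparisons above, the Mann-Whitney U statistic can be computed with the standard library alone. The study used a full statistical package; the serum values below are invented for illustration.

```python
def mann_whitney_u(x, y):
    """U statistic for sample x vs. y (0.5 credit for ties)."""
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical BDNF serum levels (ng/ml) for patients vs. controls
patients = [21.3, 18.7, 25.1, 19.9]
controls = [20.2, 22.8, 17.5]
u = mann_whitney_u(patients, controls)
# Under H0 (no group difference), E[U] = len(x) * len(y) / 2
```

In practice the statistic would be converted to a p-value (exact tables for small samples, normal approximation otherwise), which is what the reported p < 0.05 thresholds refer to.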
Procedia PDF Downloads 244
503 Influence of Dietary Boron on Gut Absorption of Nutrients, Blood Metabolites and Tissue Pathology
Authors: T. Vijay Bhasker, N. K. S Gowda, P. Krishnamoorthy, D. T. Pal, A. K. Pattanaik, A. K. Verma
Abstract:
Boron (B) is a newer trace element, and its biological importance and dietary essentiality are unclear in animals. The available literature suggests a putative role in bone mineralization, antioxidant status, and steroid hormone synthesis. A feeding trial was conducted in Wistar strain (Rattus norvegicus) albino rats for a duration of 90 days. A total of 84 healthy weaned (3-4 weeks) experimental rats were randomly divided into 7 dietary groups (4 replicates of three each), viz., A (basal diet/control), B (basal diet + 5 ppm B), C (basal diet + 10 ppm B), D (basal diet + 20 ppm B), E (basal diet + 40 ppm B), F (basal diet-Ca 50%), G (basal diet-Ca 50% + 40 ppm B). The dietary level of calcium (Ca) was maintained at two levels, 100% and 50% of the requirement. Sodium borate was used as the source of boron, along with the other ingredients of the basal diet, while preparing the pelletized diets. All rats were kept in a properly ventilated laboratory animal house maintained at 23±2 °C and 50-70% humidity. At the end of the experiment, a digestibility trial was conducted for 5 days to estimate nutrient digestibility and the gut absorption of minerals. Eight rats from each group were sacrificed to collect the vital organs (liver, kidney, and spleen) for histopathology, and blood was drawn by heart puncture to determine the biochemical profile. The average daily feed intake (g/rat/day), water intake (ml/rat/day), and body weight gain (g/rat/day) were similar among the dietary groups. The digestibility (%) of organic matter and crude fat was significantly improved (P < 0.05) by B supplementation. The gut absorption (%) of Ca was significantly increased (P < 0.01) in B-supplemented groups compared to control, while the digestibility of dry matter and crude protein and the gut absorption of magnesium and phosphorus showed a non-significant increasing trend with B supplementation.
The gut absorption (%) of B was significantly lower (P < 0.05) in supplemented groups compared to un-supplemented ones. The serum levels of triglycerides (mg/dL), HDL-cholesterol (mg/dL), and alanine transaminase (IU/L) were significantly lowered (P < 0.05) in B-supplemented groups, while serum glucose (mg/dL) and alkaline phosphatase (KA units) showed a non-significant decreasing trend with B supplementation. However, the serum levels of total cholesterol (mg/dL) and aspartate transaminase (IU/L) were similar among the dietary groups. Histology sections of the kidney and spleen revealed no significant changes among the dietary groups and showed normal anatomical architecture. The liver histology, however, revealed degenerative cell changes with vacuolar degeneration and nuclear condensation in the Ca-deficient groups; these degenerative changes were comparatively mild in the 40 ppm B-supplemented Ca-deficient group. In conclusion, dietary supplementation of graded levels of boron in rats had a positive effect on metabolism and health by improving nutrient digestibility and the gut absorption of Ca, indicating a beneficial role of dietary boron supplementation.
Keywords: boron, calcium, nutrient utilization, histopathology
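The digestibility and gut-absorption percentages reported above follow the standard apparent-digestibility arithmetic from a digestion trial: the fraction of intake not recovered in faeces. The formula is the conventional one for such trials, and the values below are an illustrative assumption, not taken from the study.

```python
def apparent_digestibility(intake_g, faecal_g):
    """Percent of ingested nutrient apparently absorbed:
    (intake - faecal excretion) / intake * 100."""
    if intake_g <= 0:
        raise ValueError("intake must be positive")
    return 100.0 * (intake_g - faecal_g) / intake_g

# Hypothetical 5-day totals for calcium in one replicate (grams)
ca_absorption = apparent_digestibility(intake_g=2.5, faecal_g=0.9)
```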
Procedia PDF Downloads 318
502 Estimation of Particle Number and Mass Doses Inhaled in a Busy Street in Lublin, Poland
Authors: Bernard Polednik, Adam Piotrowicz, Lukasz Guz, Marzenna Dudzinska
Abstract:
Transportation is considered responsible for the increased exposure of road users (drivers, car passengers, and pedestrians) as well as inhabitants of houses located near roads to pollutants emitted from vehicles. Accurate estimates are, however, difficult, as exposure depends on many factors, such as traffic intensity or fuel type, as well as the topography and built-up area around the individual routes. The season and weather conditions are also important. In the case of inhabitants of houses located near roads, their exposure depends on the distance from the road, window tightness, and other factors that reduce pollutant infiltration. This work reports the variations in particle concentrations along a selected road in Lublin, Poland, and their impact on the exposure of road users as well as inhabitants of houses located near the road. Mobile and fixed-site measurements were carried out at peak (around 8 a.m. and 4 p.m.) and off-peak (12 a.m., 4 a.m., and 12 p.m.) traffic times in all four seasons. Fixed-site measurements were performed at 12 measurement points along the route. The number and mass concentrations of particles were determined using a P-Trak model 8525, an OPS 3330, and a DustTrak DRX model 8533 (TSI Inc., USA), and a Grimm Aerosol Spectrometer 1.109 with Nano Sizer 1.321 (Grimm Aerosol, Germany). The results indicated that the highest concentrations of traffic-related pollution were measured near 4-way traffic intersections during peak hours in the autumn and winter. The highest average number concentration of ultrafine particles (PN0.1) and mass concentration of fine particles (PM2.5) in fixed-site measurements were obtained in the autumn and amounted to 23.6 ± 9.2×10³ pt/cm³ and 135.1 ± 11.3 µg/m³, respectively. The highest average number concentration of submicrometer particles (PN1) was measured in the winter and amounted to 68 ± 26.8×10³ pt/cm³.
The estimated doses of particles deposited in the commuters’ and pedestrians’ lungs within an hour near 4-way TIs in peak hours in the summer amounted to 4.3 ± 3.3×10⁹ pt/h (PN0.1) and 2.9 ± 1.4 µg/h (PM2.5) for commuters, and 3.9 ± 1.1×10⁹ pt/h (PN0.1) and 2.5 ± 0.4 µg/h (PM2.5) for pedestrians. While estimating the doses inhaled by the inhabitants of premises located near the road, one should take into account the different fractional penetration of particles from outdoors to indoors. Such doses assessed for the autumn and winter are up to twice as high as the doses inhaled by commuters and pedestrians in the summer. In the winter, traffic-related ultrafine particles account for over 70% of all ultrafine particles deposited in the pedestrians’ lungs. The share of traffic-related PM10 particles was estimated at approximately 33.5%. In conclusion, the results of the particle concentration measurements along a road in Lublin indicated that the concentration is mainly affected by the traffic intensity and weather conditions. Further detailed research should focus on how the season and the meteorological conditions affect concentration levels of traffic-related pollutants and the exposure of commuters and pedestrians as well as the inhabitants of houses located near traffic routes.
Keywords: air quality, deposition dose, health effects, vehicle emissions
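For illustration, the hourly dose estimate discussed above follows the common model dose = concentration × ventilation rate × deposition fraction. The sketch below uses the fixed-site mean concentrations quoted in the abstract, but the ventilation rate and deposition fractions are assumed placeholder values, not the study's actual inputs:

```python
# Hedged sketch of the deposited-dose calculation; parameter values for
# ventilation and deposition fraction are illustrative assumptions.

def deposited_dose(concentration, ventilation_m3_per_h, deposition_fraction):
    """Return the dose deposited in the lungs per hour.

    concentration: particle number (pt/m^3) or mass (ug/m^3) concentration
    ventilation_m3_per_h: inhaled air volume per hour (m^3/h)
    deposition_fraction: fraction of inhaled particles retained (0..1)
    """
    return concentration * ventilation_m3_per_h * deposition_fraction

# Ultrafine number dose: convert the abstract's autumn mean from
# pt/cm^3 to pt/m^3 (x 1e6) first.
pn01 = 23.6e3 * 1e6                        # pt/m^3
dose_pn = deposited_dose(pn01, 0.9, 0.5)   # assumed 0.9 m^3/h, DF = 0.5

# Fine-particle mass dose (ug/h) from the autumn PM2.5 mean.
dose_pm = deposited_dose(135.1, 0.9, 0.3)  # assumed DF = 0.3 for PM2.5

print(f"PN0.1 dose: {dose_pn:.2e} pt/h, PM2.5 dose: {dose_pm:.1f} ug/h")
```

With these assumed parameters the number dose comes out on the order of 10¹⁰ pt/h, the same order as the doses reported in the abstract.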
Procedia PDF Downloads 95
501 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation
Authors: A. Zapata, Orlando Arroyo, R. Bonett
Abstract:
Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm hypothesis. To evaluate the effect of diaphragm flexibility, two case-study buildings were selected, representative of low-rise and mid-rise TWRCB in Colombia. The buildings were analyzed in OpenSees, using MVLEM-3D elements for the walls and shell elements for the slabs, so that the coupling effect of the diaphragm is included in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions of FEMA P-695, scaled by factors of 1.0 and 1.5 to assess the probability of collapse for the design basis earthquake (DBE) and the maximum considered earthquake (MCE), according to the location sites and hazard zone of the archetypes in the Colombian NSR-10. 
Base shear capacity, maximum roof displacement, individual wall base shear demands and probabilities of collapse were calculated to evaluate the effect of absent, rigid and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is considered explicitly, depending on the structural wall plan configuration; additionally, the nonlinear behaviour computed without a slab is more conservative than when the slab is represented. Including the flexible slab in the analysis underscores the importance of considering the slab contribution to the distribution of shear forces between structural elements according to their design resistance and rigidity. The dynamic analysis revealed that including the slab reduces the collapse probability of this system, owing to lower displacements and deformations, enhancing the safety of residents and the seismic performance. Including the slab in the model is therefore important to capture its real effect on the distribution of shear forces in the walls due to coupling, to estimate the correct nonlinear behaviour of this system, and to proportion the correct resistance and rigidity of the elements in the design, reducing the possibility of damage to the elements during an earthquake.
Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm
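The overstrength values of 1.1 to 2 quoted above are read off a pushover curve as the ratio of peak base shear to design base shear. A minimal sketch of that post-processing step, with a toy capacity curve standing in for the actual OpenSees output:

```python
# Hedged sketch: extracting an overstrength factor from a pushover curve.
# The drift history and capacity curve below are illustrative, not the
# study's results.
import numpy as np

def overstrength(base_shear, design_base_shear):
    """Overstrength = peak base shear from the pushover / design base shear."""
    return np.max(base_shear) / design_base_shear

roof_drift = np.linspace(0.0, 0.02, 50)          # assumed roof drift history
base_shear = 1500 * np.tanh(roof_drift / 0.005)  # kN, toy softening capacity curve

omega = overstrength(base_shear, design_base_shear=1000.0)
print(f"overstrength = {omega:.2f}")
```

For this toy curve the factor lands around 1.5, inside the 1.1 to 2 range reported for the archetypes.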
Procedia PDF Downloads 75
500 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the Centre-Val de Loire region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) precisely in order to be able to take action. Wetlands are no exception to this rule, even if delimiting them seems a difficult exercise, since their main characteristic is often to occupy the transition between aquatic and terrestrial environments. 
However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing, for each place, the characteristic wetland types. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare), which generally represent spatially complex features. To address this limitation, the use of very high spatial resolution images (<3 m) is necessary to map both small and large areas. Moreover, the recent evolution of artificial intelligence (AI) and deep learning methods for satellite image processing has shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of VHR images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes) with a kappa index higher than 85%.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
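The classification-and-evaluation step described above (k-NN versus SVM, scored with a kappa index) can be sketched as follows. Synthetic two-dimensional features stand in for the spectral and textural attributes of the image objects; nothing here is the study's data, and the class separations are deliberately easy:

```python
# Hedged sketch of object-oriented classification with k-NN and SVM,
# evaluated with Cohen's kappa; data are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
# Two synthetic "spectral" features for three wetland classes.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 2, 4)])
y = np.repeat([0, 1, 2], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    kappa = cohen_kappa_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: kappa = {kappa:.2f}")
```

On real image objects the kappa values would of course depend on the segmentation and feature extraction upstream of this step.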
Procedia PDF Downloads 72
499 Overview of Research Contexts about XR Technologies in Architectural Practice
Authors: Adeline Stals
Abstract:
The transformation of architectural design practices has been underway for almost forty years due to the development and democratization of computer technology. New and more efficient tools are constantly being proposed to architects, amplifying a technological wave that sometimes stimulates them and sometimes overwhelms them, depending essentially on their digital culture and the context (socio-economic, structural, organizational) in which they work on a daily basis. Our focus is on VR, AR, and MR technologies dedicated to architecture. The commercialization of affordable headsets like the Oculus Rift and the HTC Vive, or more low-tech devices like the Google Cardboard, has made these technologies more accessible. In that regard, researchers report the growing interest in these tools among architects, given the new perspectives they open up in terms of workflow, representation, collaboration, and client involvement. However, studies rarely discuss the consequences of the sample studied for their results. Our research provides an overview of VR, AR, and MR research in a corpus of papers selected from conferences and journals. A closer look at the sample of these research projects highlights the necessity of taking the context of studies into consideration in order to develop tools truly dedicated to the real practices of specific architect profiles. This literature review formalizes milestones for future challenges to address. The methodology applied is based on a systematic review of two sources of publications. The first one is the Cumincad database, which regroups publications from conferences exclusively about digital technology in architecture. The second part of the corpus is based on journal publications. Journals have been selected considering their ranking on Scimago. 
Among the journals in the predefined category ‘architecture’ and in Quartile 1 for 2018 (last update when consulted), we have retained the ones related to the architectural design process: Design Studies, CoDesign, Architectural Science Review, Frontiers of Architectural Research and Archnet-IJAR. Besides those journals, IJAC, not classified in the ‘architecture’ category, was selected by the author for its adequacy with architecture and computing. For all requests, the search terms were ‘virtual reality’, ‘augmented reality’, and ‘mixed reality’ in title and/or keywords, for papers published between 2015 and 2019 (included). This time frame was defined considering the fast evolution of these technologies in the past few years. Accordingly, the systematic review covers 202 publications. The literature review on studies about XR technologies establishes the state of the art of the current situation. It highlights that studies are mostly based on experimental contexts with controlled conditions (pedagogical ones, e.g.) or on practices established in large architectural offices of international renown. However, few studies focus on the strategies and practices developed by offices of smaller size, which represent the largest part of the market. Indeed, a European survey studying the architectural profession in Europe in 2018 reveals that 99% of offices are composed of fewer than ten people, and 71% of only one person. The study also showed that the number of medium-sized offices is continuously decreasing in favour of smaller structures. A frontier thus seems to remain between the worlds of research and practice, especially for the majority of small architectural practices having a modest use of technology. This paper constitutes a reference for the next step of the research and for further research worldwide by facilitating its contextualization.
Keywords: architectural design, literature review, SME, XR technologies
Procedia PDF Downloads 110
498 Foucault and Governmentality: International Organizations and State Power
Authors: Sara Dragisic
Abstract:
Using the theoretical analysis of the birth of biopolitics that Foucault performed through the history of liberalism and neoliberalism, in this paper we will try to show how, precisely through problematizing the role of international institutions, the model of governance differs from previous ways of objectifying body and life. Are the state and its mechanisms still a Leviathan to fight against, or can the state even be the driver of resistance against the proponents of modern governance and biopolitical power? Do paradigmatic examples of biopolitics still appear through sovereignty and (international) law, or is it precisely this sphere that shows a significant dose of incompetence and powerlessness in relation not only to the economic sphere (Foucault’s critique of neoliberalism) but also to the new politics of freedom? Have the struggle for freedom and human rights, as well as the war on terrorism, opened a new spectrum of biopolitical processes, manifested precisely through new international institutions and humanitarian discourse? We will try to answer these questions in the following way. On the one hand, we will show that the views of authors such as Agamben and Hardt and Negri, for whom the state and sovereignty are enemies to be defeated or overcome, fail to see how such attempts could translate into the politicization of life, as has happened in many instances through the doctrine of liberal interventionism and humanitarianism. On the other hand, we will point out that it is precisely the humanitarian discourse and the defense of the right to intervention that can be the incentive and basis for the politicization of the category of life and lead to the selective application of human rights. 
Žižek's example of the killing of United Nations workers and doctors in a village during the Vietnam War, who were targeted even before police or soldiers because they were seen as a powerful instrument of American imperialism (precisely because they were sincerely trying to help the population), will be the focus of this part of the analysis. We will ask whether such an interpretation amounts to a kind of liquidation of the extreme left of the political (Laclau), or whether it can at least partly explain the need to review the functioning of international organizations, ranging from those dealing with humanitarian aid (and humanitarian military interventions) to those dealing with the protection and security of the population, primarily from growing terrorism. Based on the above examples, we will also explain how the discourse of terrorism itself plays a dual role: it can appear as a tool of liberal biopolitics, although, more superficially, it mostly appears as an enemy that wants to destroy the liberal system and its values. This brings us to the basic problem that this paper will tackle: do the mechanisms of the institutional struggle for human rights and freedoms, often seen as opposed to the security mechanisms of the state, serve the governance of citizens in such a way that the latter themselves participate in producing biopolitical governmental practices? Is freedom today "nothing but the correlative development of apparatuses of security" (Foucault)? Or we can continue this line of Foucault’s argumentation and enhance the interpretation with the important question of what precisely today reflects the change in the rationality of governance in which society is transformed from a passive object into a subject of its own production. 
Finally, in order to understand the skills of biopolitical governance in modern civil society, it is necessary to pay attention to the status of international organizations, which seem to have become a significant site for the implementation of global governance. In this sense, the power of sovereignty can turn out to be an insufficiently strong power of security policy, which can go hand in hand with freedom policies, through neoliberal governmental techniques.
Keywords: neoliberalism, Foucault, sovereignty, biopolitics, international organizations, NGOs, Agamben, Hardt & Negri, Zizek, security, state power
Procedia PDF Downloads 206
497 Pedagogical Opportunities of Physics Education Technology Interactive Simulations for Secondary Science Education in Bangladesh
Authors: Mohosina Jabin Toma, Gerald Tembrevilla, Marina Milner-Bolotin
Abstract:
Science education in Bangladesh is losing its appeal at an alarming rate due to the lack of science laboratory equipment, excessive teacher-student ratios, and outdated teaching strategies. Research-based educational technologies aim to address some of the problems faced by teachers who have limited access to laboratory resources, like many Bangladeshi teachers. The Physics Education Technology (PhET) research team has been developing science and mathematics interactive simulations to help students develop deeper conceptual understanding. Still, PhET simulations are rarely used in Bangladesh. The purpose of this study is to explore Bangladeshi teachers’ challenges in learning to implement PhET-enhanced pedagogies and to examine teachers’ views on PhET’s pedagogical opportunities in secondary science education. Since it is a new technology for Bangladesh, seven workshops on PhET were conducted in Dhaka city for 129 in-service and pre-service teachers in the winter of 2023 prior to data collection. This study followed an explanatory mixed methods approach that included pre- and post-workshop surveys and five semi-structured interviews. Teachers participated in the workshops voluntarily and shared their experiences at the end. Teachers’ challenges were also identified from workshop discussions and observations. The interviews took place three to four weeks after the workshop and shed light on teachers’ experiences of using PhET in actual classroom settings. The results suggest that teachers had difficulty handling the new technology; hence, they recommended preparing a booklet and Bengali YouTube videos on PhET to assist them in overcoming their struggles. Teachers also faced challenges in using any inquiry-based learning approach due to the content-loaded curriculum and exam-oriented education system, as well as limited experience with inquiry-based education. The short duration of classes makes it difficult for them to design PhET activities. 
Furthermore, considering the limited access to computers and the internet in schools, teachers think PhET simulations can bring positive changes if used in homework activities. Teachers also think they lack the pedagogical skills and sound content knowledge needed to take full advantage of PhET. They highly appreciated the workshops and proposed that the government design teacher training modules on how to incorporate PhET simulations. Despite all the challenges, teachers believe PhET can enhance student learning, ensure student engagement and increase student interest in STEM education. Considering the lack of science laboratory equipment, teachers recognized the potential of PhET as a supplement to hands-on activities for secondary science education in Bangladesh. They believed that if PhET develops more curriculum-relevant simulations, it will bring revolutionary changes to how Bangladeshi students learn science. All the participating teachers in this study came from two organizations, and all the workshops took place in urban areas; therefore, the findings cannot be generalized to all secondary science teachers. A nationwide study is required to include teachers from diverse backgrounds. A further study could shed light on how building a professional learning community can lessen teachers’ challenges in incorporating PhET-enhanced pedagogy in their teaching.
Keywords: educational technology, inquiry-based learning, PhET interactive simulations, PhET-enhanced pedagogies, science education, science laboratory equipment, teacher professional development
Procedia PDF Downloads 95
496 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells
Authors: Salvatore Brischetto, Domenico Cesare
Abstract:
Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfill several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric and magnetic ones) interact with each other with different levels of influence and importance. An exact 3D shell model is here proposed for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation and the electric and magnetic divergence equations. The set of partial differential equations of second order in z is written using a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders and plates. The system is reduced to first order by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads. 
Several results will be proposed in tabular and graphical form to evaluate displacements, stresses and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. When piezoelectric and piezomagnetic layers are included in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open- and closed-circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful to better understand the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions by those scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical and load boundary conditions and interlaminar continuity conditions, and the description of the zigzag behaviour due to transverse anisotropy, will also be discussed and verified.
Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach
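The core of the exponential matrix method named above is that, once the governing equations are reduced to a first-order system u'(z) = A u(z), the state at the top of a layer of thickness h is obtained from the state at the bottom via the transfer matrix exp(A h). The sketch below illustrates only this propagation step on a toy 2×2 system (a single oscillator rewritten in first-order form), not the paper's full multifield shell operator:

```python
# Hedged, minimal illustration of the exponential matrix method on a
# toy first-order system; A, k and h are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

k = 3.0                       # assumed wavenumber-like coefficient
A = np.array([[0.0, 1.0],     # u' = v
              [-k**2, 0.0]])  # v' = -k^2 u
h = 0.1                       # layer thickness

u0 = np.array([1.0, 0.0])     # state vector at the bottom of the layer
u_h = expm(A * h) @ u0        # state vector at the top of the layer

# For this toy system the exact solution of the first component
# is u(h) = cos(k h), which the matrix exponential reproduces.
assert np.isclose(u_h[0], np.cos(k * h))
print(u_h)
```

In the multilayered case this propagation is repeated layer by layer, with the interlaminar continuity conditions enforced between successive transfer matrices.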
Procedia PDF Downloads 67
495 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily, and they are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency highly depends on a perfect dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by means of identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of Diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps: an experimental phase, where a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of Diclofenac aqueous solutions. The performance is evaluated through an index of utilized ozone, which relates the amount of ozone supplied to the system per milligram of degraded pollutant. 
Next, a theoretical phase, where the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that can mathematically represent the reaction mechanisms, with adjustable parameters that can be fitted to the experimental results and give the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved Diclofenac mineralization in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showing the intermediates, free radicals and products of the reaction; optimal values of the reaction rate constants that yielded Hatta numbers lower than 3 for the system modeled; a degradation percentage of 100%; and a TOC (total organic carbon) removal of 69.9%, requiring only an optimal FeO catalyst loading of 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (keeping Ha < 3) by producing microbubbles and maintaining a good catalyst distribution.
Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTs intensification
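The two figures of merit used above can be sketched in a few lines: the index of utilized ozone (ozone supplied per unit of pollutant degraded) and a Hatta number for a second-order gas-liquid reaction, where Ha < 3 indicates the regime reported for the flotation cell. All numeric inputs below are illustrative assumptions, not the study's measured values:

```python
# Hedged sketch of the ozone utilization index and Hatta number;
# every input value here is an illustrative placeholder.
import math

def ozone_utilization_index(o3_supplied_mg, pollutant_removed_mg):
    """Lower is better: ozone spent per unit mass of pollutant degraded."""
    return o3_supplied_mg / pollutant_removed_mg

def hatta_number(k2, c_liquid, diffusivity, k_l):
    """Ha = sqrt(k2 * C * D) / kL for a second-order gas-liquid reaction."""
    return math.sqrt(k2 * c_liquid * diffusivity) / k_l

iu = ozone_utilization_index(o3_supplied_mg=120.0, pollutant_removed_mg=40.0)
ha = hatta_number(k2=1e4,             # assumed rate constant, 1/(M*s)
                  c_liquid=1e-4,      # assumed bulk concentration, M
                  diffusivity=1.7e-9, # assumed O3 diffusivity, m^2/s
                  k_l=1e-4)           # assumed liquid-film coefficient, m/s

print(f"IU = {iu:.1f} mg O3 / mg pollutant, Ha = {ha:.2f}")
```

With these assumed kinetics the Hatta number falls well below 3, i.e. outside the mass-transfer-limited regime, consistent with the qualitative claim above.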
Procedia PDF Downloads 112
494 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study
Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar
Abstract:
Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. Whilst intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue is symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients for weight management, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, the main criteria being class 3 obesity, or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients undergoing conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period were identified using the Trust’s health records database. Pre-operative patient records were used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information was compared to the NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had BMI recorded or data allowing BMI to be estimated. 
Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 kg/m² and would therefore be automatically eligible according to NICE guidelines. Four further patients had significant co-morbidities, such as hypertension and osteoarthritis, which would likely be improved by weight management surgery and therefore also indicated eligibility for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, and their average age was 60.4 years. Conclusions: Based on this Trust’s experience, around 4% of patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefitted more from undergoing bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and their GORD. Use of patients’ written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis to determine outcomes following fundoplication in all patients, assessing for the incidence of recurrent disease, will be undertaken to strengthen the conclusions.
Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines
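The screening rule applied in the audit reduces to a simple check against the NICE criteria summarised in the abstract: automatic eligibility at BMI ≥ 40, and eligibility at BMI 35-40 only with a significant co-morbidity expected to improve with weight loss. A minimal sketch of that rule, as a simplification for illustration rather than clinical decision software:

```python
# Hedged sketch of the eligibility rule used in the audit; the
# thresholds follow the NICE criteria as summarised in the abstract.

def bariatric_referral_eligible(bmi: float, has_qualifying_comorbidity: bool) -> bool:
    """Return True if the patient meets the simplified NICE referral criteria."""
    if bmi >= 40:
        return True                       # class 3 obesity: automatic eligibility
    if 35 <= bmi < 40 and has_qualifying_comorbidity:
        return True                       # class 2 obesity plus co-morbidity
    return False

# The abstract's single automatically eligible patient (BMI > 40):
assert bariatric_referral_eligible(41.0, False)
# A BMI 35-40 patient qualifies only with a relevant co-morbidity:
assert bariatric_referral_eligible(36.0, True)
assert not bariatric_referral_eligible(36.0, False)
```

Applied to the cohort, this rule yields the 1 + 4 = 5 eligible patients reported above.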
Procedia PDF Downloads 60
493 White Individuals' Perception On Whiteness
Authors: Sebastian Del Corral Winder, Kiriana Sanchez, Mixalis Poulakis, Samantha Gray
Abstract:
This paper seeks to explore White privilege and Whiteness. Being White in the U.S. is often perceived as the norm, and it brings significant social, economic, educational, and health privileges that are often hidden in social interactions. One quality of Whiteness has been its invisibility: given its intrinsic impact on the system, it becomes visible only when close attention is paid to White identity and culture, and during cross-cultural interactions. Cross-cultural interaction emphasizes differences between the participants, and people of color are often viewed as “the other.” These interactions may create increased opportunity for discrimination and negative stereotypes against a person of color. Given the recent increase of violence against culturally diverse groups, there has been an increased sense of otherness and division in the country. Furthermore, research on the accent prestige theory has found that individuals who speak English with a foreign accent are perceived as less educated, competent, friendly, and trustworthy by White individuals in the United States. Using the consensual qualitative research (CQR) methodology, this study explored the cross-cultural dyad from the White individual’s perspective, focusing on the psychotherapeutic relationship. The participants were presented with an audio recording of a conversation between a psychotherapist with a Hispanic accent and a patient with an American English accent. Then, the participants completed an interview regarding their perceptions of race, culture, and cross-cultural interactions. The preliminary results suggested that the Hispanic accent alone was enough for the participants to assign stereotypical ethnic and cultural characteristics to the individual with the Hispanic accent. Given the quality of the responses, the authors completed a secondary analysis to explore Whiteness and White privilege in more depth. 
Participants were found to be on a continuum in their understanding and acknowledgment of systemic racism; while some participants listed examples of inequality, others stated that “all people are treated equally.” Most participants noted feelings of discomfort in discussing topics of cultural diversity and systemic racism, fearing they would “say the wrong thing.” Most participants placed the responsibility for discussing cultural differences on the person of color, which has been observed to create further alienation and otherness for culturally diverse individuals. The results indicate the importance of examining White individuals’ racial and cultural biases in order to promote an anti-racist stance. The results emphasize the need for greater systemic changes in education, policies, and individual awareness regarding cultural identity. They also suggest the importance of White individuals taking ownership of their own cultural biases in order to promote equity and engage in cultural humility in a multicultural world. Future research should continue exploring the roles of White ethnic identity and education, as they appear to moderate White individuals’ attitudes and beliefs regarding other races and cultures.
Keywords: culture, qualitative research, whiteness, white privilege
Procedia PDF Downloads 158
492 Official Game Account Analysis: Factors Influence Users' Judgments in Limited-Word Posts
Authors: Shanhua Hu
Abstract:
Social media, as a critical means of publicizing films, video games, and digital products, has received substantial research attention, but several critical barriers remain: (1) few studies explore the internal and external connections of a product as part of the multimodal context that gives rise to readability and commercial return; (2) multimodal analysis of game publishers' official product accounts and their impact on users' behaviors, including purchase intention, social media engagement, and playing time, is lacking; (3) no standardized, ecologically valid dataset spanning game types exists for studying the complexity of official accounts' postings over a time period. The proposed research helps to tackle these limitations in order to develop a readability model that is more ecologically valid, robust, and thorough. To accomplish this objective, this paper provides a diverse dataset comprising different visual elements and messages collected from the official Twitter accounts of the top 20 best-selling games of 2021. Video game companies target potential users through social media; a popular approach is to set up an official account to maintain exposure. Typically, major game publishers create an official account on Twitter months before the game's release date to post updates on the game's development, announce collaborations, and reveal spoilers. Analyses of tweets from those official Twitter accounts would assist publishers and marketers in identifying how to efficiently and precisely deploy advertising to increase game sales. The purpose of this research is to determine how official game accounts use Twitter to attract new customers, specifically which types of messages are most effective at increasing sales.
The dataset includes the number of days between each Twitter post and the actual release date, the readability of the post (Flesch Reading Ease Score, FRES), the number of emojis used, the number of hashtags, the number of followers of the mentioned users, the categorization of the posts (i.e., spoilers, collaborations, promotions), and the number of video views. The timeline of Twitter postings from official accounts will be compared to the history of pre-orders and sales figures to determine the potential impact of social media posts. This study aims to determine how the above-mentioned characteristics of official accounts' Twitter postings influence game sales and to examine the possible causes of this influence. The outcome will provide researchers with a list of potential aspects that could influence people's judgments in limited-word posts. With increased average online time, users adapt more quickly than before to online information exchange and reading, including word choice, sentence length, and the use of emojis or hashtags. The study of official game accounts' promotion will not only enable publishers to create more effective promotion techniques in the future but also provide ideas for future research on the influence of social media posts with a limited number of words on consumers' purchasing decisions. Future research can focus on more specific linguistic aspects, such as precise word choice in advertising.
Keywords: engagement, official account, promotion, twitter, video game
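The FRES variable in the dataset above is a fixed formula over word, sentence, and syllable counts; a minimal sketch follows, with the counting of words, sentences, and syllables left to the caller, since tokenization choices vary and are not specified in the abstract:

```python
def fres(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher scores indicate easier text
    (roughly 90-100 for very easy text, below 30 for very difficult text)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Example: a 100-word post split into 5 sentences containing 130 syllables
score = fres(words=100, sentences=5, syllables=130)  # 76.555, "fairly easy"
```

Shorter sentences and fewer syllables per word both raise the score, which is why the formula suits limited-word posts where publishers compress messages.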
Procedia PDF Downloads 76
491 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells
Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska
Abstract:
Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum active ingredient in herbicides (e.g., Roundup) used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was believed to be environmentally friendly, but recently a large body of evidence has revealed that glyphosate can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, where it occurs in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can be an important element in determining the toxic effect of glyphosate. Under Regulation (EC) No 1107/2009 of the European Parliament, it is important to assess genotoxicity and cytotoxicity not only for the parent substance but also for its impurities, which are formed at different stages of production of the major substance, glyphosate; verifying which of these compounds is more toxic is also required. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. However, studies recently performed around the world have produced results that contest the decision taken by the European Union committee. In March 2015, the World Health Organization's International Agency for Research on Cancer (IARC) decided to change the classification of glyphosate to category 2A, which means that the compound is considered "probably carcinogenic to humans". This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals.
That is why we have investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities, N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine, on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand breaks) using single cell gel electrophoresis (the comet assay) and ATP level were assessed. Cells were incubated with glyphosate and its impurities, PMIDA and bis-(phosphonomethyl)amine, at concentrations from 0.01 to 10 mM for 24 hours. Evaluating genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. ATP level was decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed only at the highest concentration to which a person can be exposed as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibited genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract 2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager.
Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells
Procedia PDF Downloads 482
490 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System
Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii
Abstract:
Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production and manage feeding systems, serve as an indicator of health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS is measured by animal experts or trained technicians based on visual observations focusing on the pin bones, thurl and hook area, tail head shape, hook angles, and short and long ribs. Since the traditional technique is highly manual and subjective, it can lead to inconsistent scores and is not cost effective. Thus, this paper proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region by using image processing techniques. To be specific, there are 23 anatomical points in the regions of the ribs, hook bones, pin bones, thurl, and tail head. These points are extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore, the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, second-order polynomial regression is applied to a subset of anatomical points to produce regression coefficients which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head, and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by using a Markov classification process to assign a BCS to individual cows.
Then the assigned BCS are revised by using a multiple regression method to produce the final BCS for each dairy cow. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the milking rotary parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vectors for each individual cow. Then the multiple regression calculator and Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression
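The second module's feature extraction described above can be sketched as follows; the point coordinates below are illustrative stand-ins for extracted anatomical points, not data from the study:

```python
import math
import numpy as np

# Illustrative (x, y) pixel coordinates of anatomical points along,
# e.g., the hook-pin contour (placeholder values, not the study's data)
points = np.array([(10, 52), (18, 47), (26, 44), (34, 45), (42, 50)], dtype=float)

# Second-order polynomial regression over the point subset; the three
# coefficients (a, b, c of a*x^2 + b*x + c) become part of the feature vector
coeffs = np.polyfit(points[:, 0], points[:, 1], deg=2)

def angle_at(p_prev, p_mid, p_next):
    """Interior angle (degrees) at p_mid formed by its two neighbours."""
    v1 = p_prev - p_mid
    v2 = p_next - p_mid
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return math.degrees(math.acos(np.clip(cos_a, -1.0, 1.0)))

# An angle at a central anatomical point extends the feature vector,
# mirroring the thurl/pin/tail-head/hook-bone angles in the paper
theta = angle_at(points[0], points[2], points[4])
feature_vector = np.concatenate([coeffs, [theta]])
```

In the full system this would be repeated per region and the concatenated vector passed to the Markov classification stage.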
Procedia PDF Downloads 159
489 A Critical Analysis of the Current Concept of Healthy Eating and Its Impact on Food Traditions
Authors: Carolina Gheller Miguens
Abstract:
Feeding is, and should be, pleasurable for living beings so that they desire to nourish themselves while preserving the continuity of the species. Social rites usually revolve around the table and are closely linked to the cultural traditions of each region and social group. Since the beginning, food has been closely linked to the products each region provides and to their respective seasons of production. With globalization and the facilities of modern life, we are able to find an ever-increasing variety of products at any time of the year on supermarket shelves. These lifestyle changes end up directly influencing food traditions. The era of uncontrolled obesity, caused by the dazzling, large, and varied supply of low-priced, ultra-processed industrial products, now lies in the past; today we are living in a time when people are putting aside the pleasure of eating to eat exclusively the foods the media dictates as healthy. Recently, the medicalization of food in our society has become so present in daily life that, almost without realizing it, we make food choices conditioned by studies of the properties of these foods. The fact that people are more attentive to their health is interesting. However, when this care becomes an obsessive disorder, which imposes itself on the pleasure of eating and extinguishes traditional customs, it becomes dangerous for our recognition as citizens belonging to a culture and society. This new way of living generates a rupture with the social environment of origin, possibly exposing old traditions to oblivion after two or three generations. Based on these facts, the present study analyzes the social transformations in our society that triggered the current medicalization of food. In order to clarify what a healthy diet actually is, this research proposes a critical analysis of the subject, aiming to understand nutritional rationality and how it acts in the medicalization of food.
A wide bibliographic review of the subject was carried out, followed by exploratory research in online (especially social) media, a relevant source in this context due to the perceived influence of such media on contemporary eating habits. Finally, these data were crossed, critically analyzing the current situation of the concept of healthy eating and the medicalization of food. Throughout this research, it was noticed that people are increasingly seeking information about the nutritional properties of food, but instead of seeking the benefits of products traditionally eaten in their social environment, they incorporate external elements that often bring benefits similar to the foods already consumed. This is because access to information is directed by the media and exalts the exotic, since this arouses more interest among the general population. Efforts must be made to clarify that traditional products are also healthy foods, rich in history, memory, and tradition, and cannot be replaced by a standardized diet little concerned with the construction of taste and pleasure, treating food as if it were a medicinal product.
Keywords: food traditions, food transformations, healthy eating, medicalization of food
Procedia PDF Downloads 329
488 Flexural Response of Sandwiches with Micro Lattice Cores Manufactured via Selective Laser Sintering
Authors: Emre Kara, Ali Kurşun, Halil Aykul
Abstract:
Lightweight sandwiches obtained with the use of various core materials, such as foams, honeycombs, and lattice structures, which have high energy-absorbing capacity and high strength-to-weight ratios, are suitable for several applications in the transport industry (automotive, aerospace, shipbuilding), where saving fuel, increasing load-carrying capacity, vehicle safety, and decreasing emissions of harmful gases are very important aspects. While sandwich structures with foam and honeycomb cores have been applied for many years, there is growing interest in a new generation of sandwiches with micro lattice cores. Various production methods have been created for these core structures as the technology has developed. One of these production technologies is an additive manufacturing technique called selective laser sintering/melting (SLS/SLM), which is very popular nowadays because it saves production time and enables the production of complex topologies. The static bending and dynamic low-velocity impact behavior of sandwiches with carbon fiber/epoxy skins and micro lattice cores produced via SLS/SLM have been reported in just a few studies. The goal of this investigation was to analyze the flexural response of sandwiches consisting of glass fiber reinforced plastic (GFRP) skins and micro lattice cores manufactured via SLS under thermo-mechanical loads, comparing the results in terms of peak load and absorbed energy with respect to the effects of core cell size, temperature, and support span length. The micro lattice cores were manufactured using SLS technology, which creates the product drawn by 3D computer-aided design (CAD) software. The lattice cores, designed as a body-centered cubic (BCC) model with two different cell sizes (d = 2 and 2.5 mm) and a strut diameter of 0.3 mm, were produced using titanium alloy (Ti6Al4V) powder.
During the production of all the core materials, the same production parameters, such as laser power, laser beam diameter, and building direction, were kept constant. The vacuum infusion (VI) method was used to produce the skin materials, made of [0°/90°] woven S-glass prepreg laminates. The core and skins were combined under VI. Three-point bending tests were carried out on a servo-hydraulic test machine with different support span distances (L = 30, 45, and 60 mm) at various temperatures (T = 23, 40, and 60 °C) in order to analyze the influences of support span and temperature. The failure modes of the collapsed sandwiches were investigated using 3D computed tomography (CT), which allows a three-dimensional reconstruction of the analyzed object. The main results of the bending tests are the load-deflection curves, peak force, and absorbed energy values. The results were compared according to the effects of cell size, support span, and temperature. The obtained results have particular importance for applications that require lightweight structures with a high capacity for energy dissipation, such as the transport industry, where problems of collision and crash have increased in recent years.
Keywords: light-weight sandwich structures, micro lattice cores, selective laser sintering, transport application
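The two headline quantities from the bending tests, peak force and absorbed energy, come straight off the load-deflection curve; a minimal sketch with made-up data (the numbers below are illustrative, not the study's measurements):

```python
import numpy as np

# Illustrative load-deflection record from a three-point bending test:
# deflection in mm, load in N (placeholder values)
deflection = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
load = np.array([0.0, 150.0, 310.0, 450.0, 520.0, 480.0, 300.0])

peak_force = load.max()  # N

# Absorbed energy = area under the load-deflection curve,
# approximated here with the trapezoidal rule (N*mm = mJ)
absorbed_energy = float(np.sum((load[1:] + load[:-1]) / 2.0 * np.diff(deflection)))
```

The trapezoidal rule is used explicitly here so the sketch does not depend on any particular NumPy integration helper; real test-machine software typically reports both quantities directly.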
Procedia PDF Downloads 340
487 Aerosol Chemical Composition in Urban Sites: A Comparative Study of Lima and Medellin
Authors: Guilherme M. Pereira, Kimmo Teinïla, Danilo Custódio, Risto Hillamo, Célia Alves, Pérola de C. Vasconcellos
Abstract:
Large South American cities often present serious air pollution problems, and their atmospheric composition is influenced by a variety of emission sources. The South American Emissions, Megacities, and Climate (SAEMC) project has focused on the study of emissions and their influence on climate in the largest South American cities, and it also included Lima (Peru) and Medellin (Colombia), sites where few studies of this kind had been done. Lima is a coastal city with more than 8 million inhabitants and the second largest city in South America. Medellin is a city of 2.5 million inhabitants and the second largest city in Colombia; it is situated in a valley. The samples were collected on quartz fiber filters with high-volume samplers (Hi-Vol) over 24-hour sampling periods, in intensive campaigns at both sites in July 2010. Several species were determined in the aerosol samples of Lima and Medellin: organic and elemental carbon (OC and EC) by thermal-optical analysis; biomass burning tracers (levoglucosan - Lev, mannosan - Man, and galactosan - Gal) by high-performance anion exchange ion chromatography with mass spectrometric detection; and water-soluble ions by ion chromatography. The average particulate matter was similar for both campaigns; the PM10 concentrations were above the World Health Organization daily guideline (50 µg m⁻³) in 40% of the samples in Medellin, while in Lima they were above that value in 15% of the samples. The average total ion concentration was higher in Lima (17450 ng m⁻³ in Lima and 3816 ng m⁻³ in Medellin), and the average concentrations of sodium and chloride were higher at this site; these species were also better correlated (Pearson’s coefficient = 0.63), suggesting a higher influence of marine aerosol at the site due to its location on the coast. Sulphate concentrations were also much higher at the Lima site, which may be explained by a higher influence of marine-derived sulphate.
However, the OC, EC, and monosaccharide average concentrations were higher at the Medellin site; this may be due to the lower dispersion of pollutants caused by the site’s location and a larger influence of biomass burning sources. The average levoglucosan concentration was 95 ng m⁻³ for Medellin and 16 ng m⁻³ for Lima, and OC was well correlated with levoglucosan in Medellin (Pearson’s coefficient = 0.86), suggesting a higher influence of biomass burning on the organic aerosol at this site. The Lev/Man ratio, often related to the type of biomass burned, was close to 18, similar to values observed in previous studies at biomass-burning-impacted sites in the Amazon region; backward trajectories also suggested the transport of aerosol from that region. Biomass burning appears to have a larger influence on air quality in Medellin, in addition to vehicular emissions, while Lima showed a larger influence of marine aerosol during the study period.
Keywords: aerosol transport, atmospheric particulate matter, biomass burning, SAEMC project
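The Pearson coefficients reported above follow the standard definition; a minimal sketch with illustrative concentration series (the numbers are placeholders, not the campaign data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative daily levoglucosan (ng m-3) and OC (ug m-3) values
lev = [10.0, 20.0, 30.0, 40.0]
oc = [1.2, 2.1, 3.3, 3.9]
r = pearson(lev, oc)  # close to 1: OC tracks the biomass burning tracer
```

A coefficient near 1 between OC and levoglucosan, as in Medellin, indicates that the organic carbon largely co-varies with the biomass burning tracer.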
Procedia PDF Downloads 263
486 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely vast and complex datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins such as sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, and efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represents a transformative force reshaping industries worldwide. Its pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data has an impact across multiple sectors, such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment, and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications span optimization of vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of overall transportation systems, and also mitigation of pollutant emissions, contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments.
Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges: data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data against privacy, security, and data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data can enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 60
485 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. In spite of the different ways of handling used tires, the most common is to deposit them in a landfill, creating stocks of tires. These stocks can pose a fire danger and provide habitat for rodents, mosquitoes, and other pests, causing health hazards and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that can break down the crosslink bonds in rubber is called devulcanization. Strictly, devulcanization can be defined as a process where poly-, di-, and mono-sulfidic bonds, formed during vulcanization, are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere. This is because it is chemically inactive, nontoxic, nonflammable, and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed, and quantity of CO₂ were the parameters that were varied during the process. The devulcanized rubber was characterized by its devulcanization percent and crosslink density by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters.
The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The reached values were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that the devulcanization happened successfully, without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to do further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
Keywords: devulcanization, recycling, rubber, waste
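As context for the swelling measurements mentioned above, crosslink density is commonly computed from the equilibrium rubber volume fraction via the Flory-Rehner equation; a minimal sketch, where the interaction parameter and the example volume fraction are illustrative assumptions, not values from the study:

```python
import math

def flory_rehner(vr: float, chi: float, vs: float = 106.3) -> float:
    """Crosslink density (mol/cm^3) from equilibrium swelling (Flory-Rehner).
    vr:  rubber volume fraction in the swollen gel (from the swelling test)
    chi: polymer-solvent interaction parameter (assumed value for rubber/toluene)
    vs:  molar volume of the solvent in cm^3/mol (~106.3 for toluene)
    """
    return -(math.log(1.0 - vr) + vr + chi * vr**2) / (vs * (vr**(1.0 / 3.0) - vr / 2.0))

# A more swollen sample (lower vr) gives a lower crosslink density,
# i.e., a more devulcanized rubber; vr = 0.2 and chi = 0.39 are placeholders
nu = flory_rehner(vr=0.2, chi=0.39)
```

In a devulcanization study, this crosslink density is tracked across extrusion conditions, and its relative decrease versus the soluble fraction is what the Horikx analysis compares against the main-chain and crosslink scission curves.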
Procedia PDF Downloads 385
484 Female Entrepreneurship in the Creative Industry: The Antecedents of Their Ventures' Performance
Authors: Naoum Mylonas, Eugenia Petridou
Abstract:
Objectives: The objectives of this research are, firstly, to develop an integrated model of factors predicting new venture performance, taking into account certain issues and specificities related to the creative industry and female entrepreneurship based on prior research; secondly, to determine appropriate measures of venture performance in a creative industry context, drawing upon previous surveys; thirdly, to illustrate the importance of entrepreneurial orientation, networking ties, environmental dynamism, and access to financial capital for new venture performance. Prior work: An extensive review of the creative industry literature highlights the special nature of entrepreneurship in this field. Entrepreneurs in the creative industry share certain specific characteristics and intentions, such as to produce something aesthetic, to enrich their talents and creativity, and to combine their entrepreneurial with their artistic orientation. Thus, assessing venture performance and success in the creative industry entails an examination of how creative people or artists conceptualize success. Moreover, female entrepreneurs manifest more positive attitudes towards sectors based primarily on creativity, rather than innovation, in which males outbalance them. As creative industry entrepreneurship is based mainly on the creative personality of the creator or artist, there is high interest in examining female entrepreneurship in the creative industry. Hypothesis development: H1a: Female entrepreneurs who are more entrepreneurially oriented show higher financial performance. H1b: Female entrepreneurs who are more artistically oriented show higher creative performance. H2: Female entrepreneurs who have a more creative personality perform better. H3: Female entrepreneurs who participate in or belong to networks perform better. H4: Female entrepreneurs who have been consulted by a mentor perform better.
H5a: Female entrepreneurs who are motivated more by pull factors perform better. H5b: Female entrepreneurs who are motivated more by push factors perform worse. Approach: A mixed-method triangulation design was adopted for the collection and analysis of data. The data were collected through a structured questionnaire for the quantitative part and through semi-structured interviews for the qualitative part. The sample is 293 Greek female entrepreneurs in the creative industry. Main findings: All research hypotheses are accepted. The majority of creative industry entrepreneurs evaluate themselves in creative performance terms rather than financial ones. Individuals closely related to traditional arts sectors show no entrepreneurial orientation but nonetheless evaluate themselves highly in terms of venture performance. The creative personality of creators appears to be the most important predictor of venture performance. Pull factors, in accordance with our hypothesis, lead to higher levels of performance compared to push factors. Networking and mentoring are viewed as very important, particularly now during the turbulent economic environment in Greece. Implications-Value: Our research provides an integrated model with several moderating variables to predict venture performance in the creative industry, taking into account the complicated nature of the arts and the way artists and creators define success. Finally, the findings may be used for the appropriate design of educational programs in creative industry entrepreneurship. This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Heracleitus II. Investing in knowledge society through the European Social Fund.
Keywords: venture performance, female entrepreneurship, creative industry, networks
483 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since this 2014 report to better understand the implications for improving child nutrition, health and survival. Following PRISMA guidelines, this review was conducted using search terms describing the relationship between wasting and stunting. Studies of children under five from low- and middle-income countries that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. 45 studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications and 2 published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting. Country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time; global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone. These children should therefore be considered a high-risk group in the targeting of treatment.
Wasting, stunting and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12 and 30 months of age, with younger children being the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to influence a child’s subsequent experience of wasting and stunting. Evidence suggests that the use of mid-upper-arm circumference combined with weight-for-age z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence suggests a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child, demonstrating the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing and research must become more unified.
Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
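The classifications discussed above follow the standard WHO anthropometric cutoffs (a z-score below −2 defines wasting on weight-for-height and stunting on height-for-age). A minimal sketch of such a classifier is given below; the function name is our own, the cutoffs are the standard WHO definitions, and the example values are illustrative only, not data from the review.

```python
def classify(whz: float, haz: float) -> str:
    """Classify a child's nutritional status from anthropometric z-scores.

    whz: weight-for-height z-score (wasting indicator, WHO cutoff < -2)
    haz: height-for-age z-score (stunting indicator, WHO cutoff < -2)
    """
    wasted = whz < -2.0
    stunted = haz < -2.0
    if wasted and stunted:
        # the high-risk group the review highlights
        return "concurrently wasted and stunted"
    if wasted:
        return "wasted"
    if stunted:
        return "stunted"
    return "neither"

# Illustrative values only
print(classify(-2.5, -2.3))  # concurrently wasted and stunted
print(classify(-1.0, -2.4))  # stunted
```

Treating the two deficits jointly, rather than as separate binary indicators, is precisely what allows concurrently affected children to be targeted as a distinct high-risk group.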
482 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates
Authors: Pedro Llanos, Diego García
Abstract:
Spaceflight is considered the last frontier in terms of science, technology, and engineering. But it is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth’s gravity and atmospheric conditions, humans face environmental stresses in spaceflight for which their physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratio of the cardiovascular responses (pre-flight/in-flight/post-flight) was calculated to compare inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs.
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction-type); some were considered triggered physiological responses to high-G/unusual-attitude training and some were considered instrument artifact. Variation events observed in subjects during the +Gz and –Gz maneuvers may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the subjects’ breathing rate, due in part to the varying levels of energy expenditure from increased muscle work during these aerobatic maneuvers. Noteworthy was the high heterogeneity of physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture pathology underlying dangerous dysrhythmias and improve suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts
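The pre-/in-/post-flight ratio comparison described in this abstract can be sketched as a simple baseline normalization; the function name and the numbers below are hypothetical, illustrating only the kind of calculation the study describes, not its actual data or method.

```python
def hr_ratios(pre_hr, inflight_hr, post_hr):
    """Normalize in-flight and post-flight heart rate to the pre-flight baseline,
    so responses can be compared across subjects with different resting rates."""
    return inflight_hr / pre_hr, post_hr / pre_hr

# Hypothetical subject: 70 bpm at rest, 105 bpm during high-G maneuvers, 84 bpm post-flight
in_ratio, post_ratio = hr_ratios(70.0, 105.0, 84.0)
print(round(in_ratio, 2), round(post_ratio, 2))  # 1.5 1.2
```

Expressing responses as ratios rather than absolute beats per minute is what makes the inter-individual comparison across 20 subjects meaningful.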