A Longitudinal Exploration into Computer-Mediated Communication (CMC) Use and Relationship Change between 2005 and 2018
Authors: Laurie Dempsey
Abstract:
Relationships are considered to be beneficial for emotional wellbeing, happiness and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, where the change to or ending of these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, Media Studies’ and Sociology’s research interests intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005 and 2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed vast networks to be accumulated and strengthened; others contend that it has eradicated the core values and norms associated with communication, damaging relationships. This research collaborated with the UK media regulator Ofcom, utilising the longitudinal dataset from its Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, in which the same 18 participants took part in annual filmed in-home depth interviews. The interviews’ raw video footage was examined year on year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed and coded using NVivo 11 software. This study allowed for a comprehensive exploration of these individuals’ changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones.
It found that as technology developed between 2005 and 2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even factoring into the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child; extended family; and friendships. Over the years there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried to both restrict and encourage their child’s technology use; for estranged family members ‘forced’ together in the online sphere; and for friendships compelled to publicly display their relationship on social media for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds via previously unachievable means, across both time and distance. A longitudinal study of this length and nature utilising the same participants does not currently exist; this research therefore provides crucial insight into how and why relationship dynamics alter over time. This unique and topical piece of research draws together Sociology and Media Studies, illustrating how the UK’s changing technological landscape can reshape one of the most basic human compulsions. The collaboration with Ofcom allows for insight that can be utilised in academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.

Keywords: computer-mediated communication, longitudinal research, personal relationships, qualitative data
Procedia PDF Downloads: 121

Determination of the Presence of Antibiotic Resistance in Vibrio Species in Northern Italy
Authors: Tramuta Clara, Masotti Chiara, Pitti Monica, Adriano Daniela, Battistini Roberta, Serraca Laura, Decastelli Lucia
Abstract:
Oysters are filter feeders, and their raw consumption may increase health risks for consumers: it is often associated with outbreaks of gastroenteritis or enteric illnesses. Most of these foodborne diseases are caused by Vibrio strains, enteric pathogens also involved in the diffusion of genetic determinants of antibiotic resistance and their entrance along the food chain. In its 2017 European Union report on antimicrobial resistance, the European Food Safety Authority (EFSA) drew attention to the role of food as a possible carrier of antibiotic-resistant bacteria or antibiotic-resistance genes that pose health risks for humans. This study aims to determine antibiotic resistance and antibiotic-resistance genes in Vibrio spp. isolated from Crassostrea gigas oysters collected in the Golfo della Spezia (Liguria, Italy). A total of 47 Vibrio spp. strains were isolated (ISO 21872-2:2017) during the summer of 2021 from Crassostrea gigas oysters. The strains were identified by MALDI-TOF mass spectrometry (Bruker, Germany) and tested for antibiotic susceptibility using a broth microdilution method (ISO 20776-1:2019) with Sensititre EUVSEC plates (Thermo Fisher Scientific) to obtain the Minimum Inhibitory Concentration (MIC). The strains were tested with PCR-based biomolecular methods, according to previous works, to define the presence of 23 resistance genes of the main classes of antibiotics used in human and veterinary medicine: tet(A), tet(B), tet(C), tet(D), tet(E), tet(G), tet(K), tet(L), tet(M), tet(O), tet(S) (tetracycline resistance); blaCTX-M, blaTEM, blaOXA, blaSHV (β-lactam resistance); mcr-1 and mcr-2 (colistin resistance); qnrA, qnrB, and qnrS (quinolone resistance); sul1, sul2 and sul3 (sulfonamide resistance). Six different species were identified: V. alginolyticus 34% (n=16), V. harveyi 28% (n=13), V. fortis 15% (n=7), V. pelagius 8% (n=4), V. parahaemolyticus 11% (n=5) and V. chagasii 4% (n=2).
The PCR assays showed the presence of the blaTEM gene in 40% of the strains (n=19). None of the other genes was detected, except for one V. alginolyticus strain positive for the qnrS gene. The broth microdilution results showed a high level of resistance to ciprofloxacin (62%; n=29), ampicillin (47%; n=22), and colistin (49%; n=23). Furthermore, 32% (n=15) of strains can be considered multiresistant due to the simultaneous presence of resistance to three different antibiotic classes. Susceptibility towards meropenem, azithromycin, gentamicin, ceftazidime, cefotaxime, chloramphenicol, tetracycline and sulphamethoxazole reached 100%. The Vibrio species identified in this study are widespread in marine environments and can cause gastrointestinal infections after the ingestion of raw fish products and bivalve molluscs. The level of resistance to antibiotics such as ampicillin, ciprofloxacin and colistin can be connected to anthropic factors (industrial, agricultural and domestic wastes) that promote the spread of resistance to these antibiotics. A strong correlation can also be observed between phenotypic (resistant MIC) and genotypic (positive blaTEM gene) resistance to ampicillin in the same strains, probably due to the transfer of genetic material between bacterial strains. Consumption of raw bivalve molluscs can therefore represent a risk for consumers' health due to the potential presence of foodborne pathogens that are highly resistant to different antibiotics and a source of transferable antibiotic-resistance genes.

Keywords: Vibrio species, blaTEM genes, antimicrobial resistance, PCR
Influence of Cryo-Grinding on Antioxidant Activity and Amount of Free Phenolic Acids, Rutin and Tyrosol in Whole Grain Buckwheat and Pumpkin Seed Cake
Authors: B. Voucko, M. Benkovic, N. Cukelj, S. Drakula, D. Novotni, S. Balbino, D. Curic
Abstract:
Oxidative stress is considered one of the causes leading to metabolic disorders in humans. Therefore, the ability of antioxidants to inhibit free radical production is their primary role in the human organism. Antioxidants originating from cereals, especially flavonoids and polyphenols, are mostly bound and indigestible. Micronization damages the cell wall, which consequently makes the bioactive material more accessible in vivo. In order to ensure complete fragmentation, micronization is often combined with high temperatures (e.g., 200°C for bran), which can lead to degradation of bioactive compounds. The innovative non-thermal technology of cryo-milling is an ultra-fine micronization method that uses liquid nitrogen (LN2) at a temperature of -195°C to freeze and cool the sample during milling. Freezing at such low temperatures causes the material to become brittle, which ensures the generation of fine particles while preserving the bioactive content of the material. The aim of this research was to determine whether the production of ultra-fine material with cryo-milling would result in the augmentation of available bioactive compounds of buckwheat and pumpkin seed cake. For that reason, buckwheat and pumpkin seed cake were ground in a ball mill (CryoMill, Retsch, Germany) with and without the use of LN2 for 8 minutes, in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm) at an oscillation frequency of 30 Hz. The cryo-milled samples were cooled with LN2 for 2 minutes prior to milling, followed by the first cycle of milling (4 minutes), intermediary cooling (2 minutes), and finally the second cycle of milling (a further 4 minutes). A continuous milling process was applied to the samples ground without LN2 freezing. Particle size distribution was determined using the Scirocco 2000 dry dispersion unit (Malvern Instruments, UK).
Antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) test and the ferric reducing antioxidant power (FRAP) assay, while the total phenol content was determined using the Folin-Ciocalteu method with an ultraviolet-visible spectrophotometer (Specord 50 Plus, Germany). The content of the free phenolic acids, rutin in buckwheat and tyrosol in pumpkin seed cake, was determined with an HPLC-PDA method (Agilent 1200 series, Germany). Cryo-milling resulted in 11 times smaller buckwheat particles and 3 times smaller pumpkin seed particles than milling without the use of LN2, but also in a lower uniformity of the particle size distribution. The lack of freezing during the milling of pumpkin seed cake caused the formation of agglomerates due to its high fat content (21%). Cryo-milling augmented the antioxidant activity of buckwheat flour measured by the DPPH test (23.9%) and increased the available rutin content (14.5%). It also augmented the total phenol content (36.9%) and the available tyrosol content (12.5%) of pumpkin seed cake. Antioxidant activity measured with the FRAP test, as well as the content of phenolic acids, remained unchanged independent of the milling process. The results of this study show the potential of cryo-milling for complete raw material utilization in the food industry, as well as a tool for the extraction of targeted bioactive components.

Keywords: bioactive, ball-mill, buckwheat, cryo-milling, pumpkin seed cake
Socio-Economic Determinants of Physical Activity of Non-Manual Workers, Including the Early Senior Group, from the City of Wroclaw in Poland
Authors: Daniel Puciato, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Michał Rozpara, Władysław Mynarski, Agnieszka Gawlik, Małgorzata Dębska, Soňa Jandová
Abstract:
Physical activity as a part of people’s everyday life reduces the risk of many diseases, including those induced by lifestyle, e.g. obesity, type 2 diabetes, osteoporosis, coronary heart disease, degenerative arthritis, and certain types of cancer. This refers particularly to professionally active people, including the early senior group working in non-manual positions. The aim of the study is to evaluate the relationship between physical activity and the socio-economic status of non-manual workers from Wroclaw, one of the biggest cities in Poland and a model setting for such investigations in this part of Europe. The crucial problem in the research is to find out the percentage of respondents who meet the health-related recommendations of the World Health Organization (WHO) concerning the volume, frequency, and intensity of physical activity, as well as to establish whether the most important socio-economic factors, such as gender, age, education, marital status, per capita income, savings and debt, determine compliance with the WHO physical activity recommendations. During the research, conducted in 2013, 1,170 people (611 women and 559 men) aged 21-60 years were examined. A diagnostic survey method was applied to collect the data. Physical activity was measured with the short form of the International Physical Activity Questionnaire, extended with socio-demographic questions concerning gender, age, education, marital status, income, savings and debts. To evaluate the relationship between physical activity and selected socio-economic factors, logistic regression was used (odds ratio statistics). Statistical inference was conducted at the ex ante adopted probability level of p < 0.05. The majority of respondents met the volume of physical effort recommended for health benefits. This was particularly noticeable in the case of the examined men.
The probability of compliance with the WHO physical activity recommendations was highest for workers aged 21-30 years with secondary or higher education who were single, received the highest incomes and had savings. The results indicate relations between physical activity and socio-economic status in the examined women and men. People with lower socio-economic status (e.g. manual workers) are physically active primarily at work, whereas those better educated and wealthier undertake physical effort primarily in their leisure time. Among the investigated subjects, the youngest group of non-manual workers has the best chance of meeting the WHO standards of physical activity. The study also confirms that secondary education has a positive effect on public awareness of the role of physical activity in human life. In general, the analysis indicates that there is a relationship between physical activity and some socio-economic factors of the respondents, such as gender, age, education, marital status, income per capita, and the possession of savings. Although the obtained results cannot be generalised to the whole population, they show some important trends that will be verified in subsequent studies conducted by the authors of the paper.

Keywords: IPAQ, non-manual workers, physical activity, socio-economic factors, WHO
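The odds-ratio statistic mentioned above can be illustrated with a minimal sketch. The counts below are purely hypothetical, not the study's data; the example only shows how a crude odds ratio for meeting the WHO recommendation, comparing higher- versus lower-educated respondents, would be computed from a 2x2 table.

```python
# Hypothetical sketch: crude odds ratio from a 2x2 contingency table.
# The counts are invented for illustration and are NOT the study's data.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 table:
    a = exposed & meets recommendation,   b = exposed & does not,
    c = unexposed & meets recommendation, d = unexposed & does not."""
    return (a * d) / (b * c)

# Invented counts: education level vs. meeting WHO activity recommendations
higher_edu_meets, higher_edu_not = 320, 180
lower_edu_meets, lower_edu_not = 210, 290

or_edu = odds_ratio(higher_edu_meets, higher_edu_not,
                    lower_edu_meets, lower_edu_not)
print(round(or_edu, 2))  # an OR > 1 suggests higher education raises the odds
```

In a full analysis the odds ratios come from exponentiated logistic regression coefficients, which adjust each factor for the others; the crude 2x2 version above is only the simplest instance of the same statistic.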
Improved Anatomy Teaching by the 3D Slicer Platform
Authors: Ahmedou Moulaye Idriss, Yahya Tfeil
Abstract:
Medical imaging technology has become an indispensable tool in many branches of biomedicine, healthcare, and research, and is vitally important for the training of professionals in these fields. It is not only about the tools, technologies, and knowledge provided but also about the community that this training project proposes. In order to raise the level of anatomy teaching in the medical school of Nouakchott in Mauritania, it is necessary, and even urgent, to facilitate access to modern technology for African countries. The role of technology as a key driver of sustainable development has long been recognized. Anatomy is an essential discipline for the training of medical students and a key element in the training of medical specialists. The quality and results of a young surgeon's work depend on a sound knowledge of anatomical structures. The teaching of anatomy is difficult, as the discipline is neglected by medical students in many academic institutions. However, anatomy remains a vital part of any medical education program. When anatomy is presented in various planes, medical students report difficulties in understanding. They do not develop the ability to visualize and mentally manipulate 3D structures, and are sometimes not able to correctly identify neighbouring or associated structures. This is the case, for example, when they have to identify structures related to the caudate lobe when the liver is moved into different positions. In recent decades, modern educational tools using digital sources have tended to replace old methods. One of the main reasons for this change is the lack of cadavers in laboratories with poorly qualified staff. The emergence of increasingly sophisticated mathematical models, image processing, and visualization tools in biomedical imaging research has enabled sophisticated three-dimensional (3D) representations of anatomical structures.
In this paper, we report our current experience at the Faculty of Medicine in Nouakchott, Mauritania. One of our main aims is to create a local learning community in the field of anatomy. The main technological platform used in this project is called 3D Slicer. 3D Slicer is an open-source application, available for free, for the viewing and analysis of, and interaction with, biomedical imaging data. Using the 3D Slicer platform, we created, from real medical images, anatomical atlases of parts of the human body, including the head, thorax, abdomen, liver, pelvis, and upper and lower limbs. Data were collected from several local hospitals and also from online sources. We used MRI and CT imaging data from children and adults. Many different anatomy atlases exist, both in print and digital forms. Our anatomy atlas displays three-dimensional anatomical models, image cross-sections of labelled structures and source radiological imaging, and a text-based hierarchy of structures. The open and free online anatomical atlases developed by our anatomy laboratory team will be available to our students. This will allow pedagogical autonomy and remedy current shortcomings by responding more fully to the objectives of sustainable local development of quality education and good health at the national level. To make this work a reality, our team produced several atlases, available in our faculty in the form of research projects.

Keywords: anatomy, education, medical imaging, three-dimensional
Regulation of the Cultural Relationship between Russia and Ukraine after Crimea’s Annexation: A Comparative Socio-Legal Study
Authors: Elena Sherstoboeva, Elena Karzanova
Abstract:
This paper explores the impact of the annexation of Crimea on the regulation of live performances and tour management of Russian pop music performers in Ukraine and of Ukrainian performers in Russia. Without a doubt, the cultural relationship between Russia and Ukraine is not limited to this issue. Yet concert markets tend to respond particularly rapidly to political, economic, and social changes, especially in Russia and Ukraine, where the high level of digital piracy means that the music businesses depend mainly upon income from performances rather than from digital rights sales. This paper argues that the rules formed in both countries after Russia’s annexation of Crimea in 2014 have contributed to the separation of a single cultural space that had existed in Soviet and post-Soviet Russia and Ukraine before the annexation. These rules have also facilitated performers’ self-censorship and increased the politicisation of the music businesses in the two neighbouring countries. This study applies a comparative socio-legal approach to Russian and Ukrainian live event and tour regulation. A qualitative analysis of Russian and Ukrainian national and intergovernmental legal frameworks is applied to examine formal regulations. Soviet and early post-Soviet laws and policies are also studied, but only to the extent that they help to track the changes in the Russian-Ukrainian cultural relationship. To identify and analyse the current informal rules, the study design includes in-depth semi-structured interviews with 30 live event or tour managers working in Russia and Ukraine. A case study is used to examine how the Eurovision Song Contest, an annual international competition, has played out within the Russian-Ukrainian conflict. The study suggests that modern Russian and Ukrainian frameworks for live events and tours have perpetuated Soviet regulatory traditions, under which cultural policies served as a means of ideological control.
At the same time, contemporary regulations mark a considerable shift in perspective, as the previous rules were aimed at maintaining close cultural connections between the Russian and Ukrainian nations. Instead of collaboration, the current frameworks mostly serve as forms of repression, implying that performers must choose only one national market in which to work. The regulatory instruments vary and often impose limitations that typically exist in non-democratic regimes to restrict foreign journalism, such as visa barriers or bans on entry. The more unexpected finding is that, in comparison with Russian law, Ukrainian regulations have created more obstacles to the organisation of live tours and performances by Russian artists in Ukraine; yet this stems from commercial rather than political factors. This study predicts that the more economic challenges the Russian or Ukrainian music businesses face, the harsher the regulations will be regarding the organisation of live events or tours in the other country. This study recommends that international human rights organisations and non-governmental organisations develop and promote specific standards for artistic rights and freedoms, given the negative effects of the increasing politicisation of the entertainment business and cultural spheres on freedom of expression, cultural rights, and pluralism.

Keywords: annexation of Crimea, artistic freedom, censorship, cultural policy
Adapting Hazard Analysis and Critical Control Points (HACCP) Principles to Continuing Professional Education
Authors: Yaroslav Pavlov
Abstract:
In the modern world, ensuring quality has become increasingly important in various fields of human activity. One universal approach to quality management, proven effective in the food industry, is the HACCP (Hazard Analysis and Critical Control Points) concept. Based on the principle of preventing potential hazards to consumers at all stages of production, from raw materials to the final product, HACCP offers a systematic approach to identifying and assessing risks and managing critical control points (CCPs). Initially used primarily for food production, it was later effectively adapted to the food service sector. Implementing HACCP provides organizations with a reliable foundation for improving food safety, covering all links in the food chain from producer to consumer, making it an integral part of modern quality management systems. The main principles of HACCP, namely hazard identification, CCP determination, effective monitoring procedures, corrective actions, regular checks, and documentation, are universal and can be adapted to other areas. The adaptation of the HACCP concept is relevant for continuing professional education (CPE), with certain reservations. Specifically, it is reasonable to abandon the term ‘hazards’, as deviations in CCPs do not pose dangers in the way they do in food production. However, the approach through CCP analysis and the use of HACCP's main principles for educational services are promising. This is primarily because it allows for identifying key CCPs based on the value creation model of a specific educational organization and consequently focusing efforts on specific CCPs to manage the quality of educational services. This methodology can be called the Analysis of Critical Points in Educational Services (ACPES).
ACPES offers a similar approach to managing the quality of educational services, focusing on preventing and eliminating potential risks that could negatively impact the educational process, hinder learners' achievement of set educational goals, and ultimately lead students to reject the organization's educational services. ACPES adapts proven HACCP principles to educational services, enhancing quality management effectiveness and student satisfaction. It includes identifying potential problems at all stages of the educational process, from initial interest to graduation and career development. In ACPES, the term "hazards" is replaced with "problematic areas", reflecting the specific nature of the educational environment. Special attention is paid to determining CCPs, the stages where corrective measures can most effectively prevent or minimize the risk of failing educational goals. The ACPES principles align with HACCP's principles, adjusted for the specificities of CPE. The learner's journey map (a variation of the Customer Journey Map, CJM) can be used to overcome the complexity of formalizing the production chain in educational services. The CJM provides a comprehensive understanding of the learner's experience at each stage, facilitating targeted and effective quality management. Thus, integrating the learner's journey map into ACPES represents a significant extension of the methodology's capabilities, ensuring a comprehensive understanding of the educational process and forming an effective quality management system focused on meeting learners' needs and expectations.

Keywords: quality management, continuing professional education, customer journey map, HACCP
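To make the CCP-monitoring idea behind ACPES more concrete, the sketch below shows one possible way to represent critical control points along a learner's journey in code. All stage names, metrics, and thresholds here are invented for illustration; they are not part of the methodology as described above.

```python
# Illustrative sketch of ACPES-style CCP monitoring. Stage names, metrics,
# and limits are hypothetical assumptions, not the published methodology.
from dataclasses import dataclass

@dataclass
class CriticalControlPoint:
    stage: str              # learner-journey (CJM) stage being monitored
    metric: str             # what is measured at this stage
    limit: float            # threshold that triggers corrective action
    corrective_action: str  # what to do when the limit is exceeded

ccps = [
    CriticalControlPoint("enrolment", "application drop-off rate", 0.30,
                         "simplify the application process"),
    CriticalControlPoint("first module", "share of inactive learners", 0.20,
                         "tutor outreach within 48 hours"),
    CriticalControlPoint("assessment", "first-attempt failure rate", 0.25,
                         "revise materials and offer resits"),
]

def needs_action(ccp: CriticalControlPoint, observed: float) -> bool:
    """Monitoring step: flag a CCP whose observed metric exceeds its limit."""
    return observed > ccp.limit

# Example monitoring pass with invented observations
for ccp, observed in zip(ccps, [0.35, 0.12, 0.28]):
    if needs_action(ccp, observed):
        print(f"{ccp.stage}: {ccp.metric} = {observed} -> {ccp.corrective_action}")
```

The design mirrors HACCP's monitor-then-correct loop: each CCP pairs a measurable indicator with a predefined limit and a corrective action, so quality management becomes a routine check rather than an ad hoc reaction.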
Children’s Experience of the Built Environment in the Initial Stages of a Settlement Formation: Case Study of Shahid-Keshvari New Settlement, Isfahan, Iran
Authors: Hassan Sheikh, Mehdi Nilipour, Amiraslan Fila
Abstract:
Many conventional town planning processes do little to give children and young people a voice on what is important about the urban environment. As a result of paying little attention to children, their physical, social and mental needs are hardly met in urban environments. Urban spaces therefore fail to attract children, while their recreational space has been confined to the home or to virtual spaces. Since children are just taking their first steps in learning about the world beyond the home, their living environment will profoundly influence almost all aspects of their lives. This puts a great deal of responsibility on the shoulders of planners, who need to balance a number of different issues in urban design to make places more child-friendly. The main purpose of the present research is to analyse and plan a child-friendly environment in an ongoing urban settlement development for the benefit of all residents. Assessing children’s needs and regarding them in development strategies and policies will help to “plan for children”. Following this purpose, indicators of child-friendly environments were collected from child-friendly environment studies. Then, based on three distinct characteristics of the case study (being under construction, a lack of social ties between dwellers, and high-rise buildings), seven indicators were determined: basic services; urban and environmental qualities; family, kin, peers and community; sense of belonging and continuity; participation; safety, security and freedom of movement; and human scale. Essential data were collected through a survey, informal observation and participation in small communities, and analysed with SPSS software. The field study is Shahid-Keshvari town in Isfahan, Iran. Eighty-six middle-childhood children (ages 8-13) participated. The results show that children's satisfaction is correlated with basic services, the quality of the environment, the social environment, and safety and security.
A considerable number of children and youth (55%) would like to live somewhere other than the town. Satisfaction and the sense of belonging and continuity have a strong inverse correlation with age: as age increases, satisfaction, and consequently the sense of belonging, decreases; thus children and youth see their future somewhere outside the town. The main reasons for dissatisfaction were the basic services and the social environment. More than half of the children (55%) expressed the wish to develop basic services in terms of availability, hierarchy, and quality. Among all recreational places, children showed the most interest in parks. About three-quarters (76%) considered building a park a crucial item for residents. A significant number of children (54%) want relationships with more friends. This could be due to the serious shortage of leisure spaces such as parks or playgrounds; moreover, the space around the houses and between the apartments has not been designed for play or children's activities. In addition, the presence of strangers and construction workers has a negative impact on children's sense of peace and security: 60% of children are afraid of theft, and 36% perceive strangers as a menace. The analysis of children's issues and suggestions provides an insight for the planning and design of child-friendly environments in new towns.

Keywords: child-friendly city (CFC), child-friendly environment, child participation, under-construction environment, Isfahan Shahid-Keshvari Town
The Healing ‘Touch’ of Music: A Neuro-Acoustics Approach to Understanding Its Therapeutic Effect
Authors: Jagmeet S. Kanwal, Julia F. Langley
Abstract:
Music can heal the body, but a mechanistic understanding of this phenomenon is lacking. This study explores the effects of music presentation on neurologic and physiologic responses leading to metabolic changes in the human body. The mind and body co-exist in a corporeal entity, and within this framework, sickness ensues when the mind-body balance goes awry. It is further hypothesized that music has the capacity to directly reset this balance. Two lines of inquiry, taken together, can provide a mechanistic understanding of this phenomenon: 1) empirical evidence for a sound-sensitive pressure sensor system in the body, and 2) the notion of a “healing center” within the brain that is activated by specific patterns of sounds. From an acoustics perspective, music is spatially distributed as pressure waves ranging from a few cm to several meters in wavelength. These waves interact and propagate in three dimensions in unique ways, depending on the wavelength. Furthermore, music creates dynamically changing wave-fronts. Frequencies between 200 Hz and 1 kHz generate wavelengths that range from about 5'6" down to about 1 foot. These dimensions are in the range of the body size of most people, making it plausible that these pressure waves can geometrically interact with the body surface and create distinct patterns of pressure stimulation across the skin surface. For humans, short wavelength, high frequency (> 200 Hz) sounds are best received via cochlear receptors. For low frequency (< 200 Hz), long wavelength sound vibrations, however, the whole body may act as an ideal receiver. A vast array of highly sensitive pressure receptors (Pacinian corpuscles) is present just beneath the skin surface, as well as in the tendons, bones, several organs in the abdomen, and the sexual organs.
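The wavelength figures quoted above follow from the basic relation wavelength = speed of sound / frequency. A small sketch, assuming a speed of sound of 343 m/s (dry air at roughly room temperature), reproduces them:

```python
# Sketch: audible frequency -> wavelength in air, assuming c = 343 m/s
# (dry air at ~20 °C); the exact speed varies with temperature and humidity.
SPEED_OF_SOUND = 343.0  # m/s

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

def metres_to_feet(metres: float) -> float:
    return metres / 0.3048

for f_hz in (200, 600, 1000):
    lam = wavelength_m(f_hz)
    print(f"{f_hz:5d} Hz -> {lam:.2f} m ({metres_to_feet(lam):.1f} ft)")
# 200 Hz gives ~1.7 m (roughly 5'6"); 1 kHz gives ~0.34 m (roughly 1 foot),
# matching the body-scale range described in the text.
```

Because these wavelengths bracket typical human body dimensions, the geometric interaction of the wave-fronts with the body surface described above is physically plausible at exactly these frequencies.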
Per the available empirical evidence, these receptors contribute to music perception by allowing the whole body to function as a sound receiver, and knowledge of how they function is essential to fully understanding the therapeutic effect of music. Neuroscientific studies have established that music stimulates the limbic system, which can trigger states of anxiety, arousal, fear, and other emotions. These emotional states of brain activity play a crucial role in filtering top-down feedback from thoughts and bottom-up sensory inputs to the autonomic system, which automatically regulates bodily functions. Music likely exerts its pleasurable and healing effects by enhancing functional and effective connectivity and feedback mechanisms between brain regions that mediate reward, autonomic, and cognitive processing. Stimulation of pressure receptors under the skin by low-frequency music-induced sensations can activate multiple centers in the brain, including the amygdala, the cingulate cortex, and the nucleus accumbens. Melodies in the low (< 600 Hz) frequency range may augment auditory inputs after convergence of the pressure-sensitive inputs from the vagus nerve onto emotive processing regions within the limbic system. The integration of music-generated auditory and somato-visceral inputs may lead to a synergistic input to the brain that promotes healing. Thus, music can literally heal humans through “touch” as it energizes the brain’s autonomic system to restore homeostasis.

Keywords: acoustics, brain, music healing, pressure receptors
Procedia PDF Downloads 166
159 Photobleaching Kinetics and Epithelial Distribution of Hexylaminolevulinate-Induced PpIX in Rat Bladder Cancer
Authors: Sami El Khatib, Agnès Leroux, Jean-Louis Merlin, François Guillemin, Marie-Ange D’Hallewin
Abstract:
Photodynamic therapy (PDT) is a treatment modality based on the cytotoxic effect occurring on the target tissues by interaction of a photosensitizer with light in the presence of oxygen. One of the major advances in PDT can be attributed to the use of topical aminolevulinic acid (ALA) to induce Protoporphyrin IX (PpIX) for the treatment of early stage cancers as well as diagnosis. ALA is a precursor of the heme synthesis pathway. Locally delivered to the target tissue, ALA overcomes the negative feedback exerted by heme and promotes the transient formation of PpIX in situ to reach critical effective levels in cells and tissue. Whereas early steps of the heme pathway occur in the cytosol, PpIX synthesis takes place in the mitochondrial membranes, and PpIX fluorescence is expected to accumulate in close vicinity of the initial building site and to progressively diffuse to the neighboring cytoplasmic compartment or other lipophilic organelles. PpIX is known to be highly reactive and will be degraded when irradiated with light. PpIX photobleaching is believed to be governed by a singlet oxygen mediated mechanism in the presence of oxidized amino acids and proteins. PpIX photobleaching and subsequent spectral phototransformation have been described widely in tumor cells incubated in vitro with ALA solution, or ex vivo in human and porcine mucosa superfused with hexylaminolevulinate (hALA). PpIX photobleaching was also studied in vivo, using animal models such as normal or tumor mice skin and an orthotopic rat bladder model. Hexyl aminolevulinate, a more potent lipophilic derivative of ALA, was proposed as an adjunct to standard cystoscopy in the fluorescence diagnosis of bladder cancer and other malignancies. We have previously reported the effectiveness of hALA mediated PDT of rat bladder cancer.
Although normal and tumor bladder epithelium exhibit similar fluorescence intensities after intravesical instillation of two hALA concentrations (8 and 16 mM), the therapeutic response at 8 mM and 20 J/cm2 was completely different from the one observed at 16 mM irradiated with the same light dose. Whereas an 8 mM instillation destroys the tumor and leaves the underlying submucosa and muscle intact, 16 mM sensitization and subsequent illumination results in the complete destruction of the underlying bladder wall but leaves the tumor undamaged. The object of the current study is to try to unravel the underlying mechanism for this apparent contradiction. PpIX extraction showed identical amounts of photosensitizer in tumor-bearing bladders at both concentrations. Photobleaching experiments revealed mono-exponential decay curves in both situations, but with a decay constant two times faster in the case of 16 mM bladders. Fluorescence microscopy shows an identical fluorescence pattern, with bright spots, for normal bladders at both concentrations and for tumor bladders at 8 mM. Tumor bladders at 16 mM exhibit a more diffuse cytoplasmic fluorescence distribution. The different response to PDT with regard to the initial pro-drug concentration can thus be attributed to the different cellular localization.
Keywords: bladder cancer, hexyl-aminolevulinate, photobleaching, confocal fluorescence microscopy
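The mono-exponential photobleaching reported above, I(t) = I0·exp(-kt), is characterized by the decay constant k, which can be recovered from fluorescence readings by a log-linear least-squares fit. A hedged sketch on synthetic data (the function name and numbers are illustrative, not the study's measured values, beyond the roughly two-fold difference in decay rate):

```python
import math

def fit_decay_constant(times, intensities):
    """Fit k in I(t) = I0 * exp(-k * t) by linear least squares on log(I)."""
    n = len(times)
    logs = [math.log(v) for v in intensities]
    t_bar = sum(times) / n
    y_bar = sum(logs) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times, logs))
             / sum((t - t_bar) ** 2 for t in times))
    return -slope  # decay constant k (units: 1/time)

# Synthetic curves: per the abstract, the 16 mM decay is ~2x faster than 8 mM
times = [0.0, 10.0, 20.0, 30.0, 40.0]
curve_8mM = [math.exp(-0.05 * t) for t in times]
curve_16mM = [math.exp(-0.10 * t) for t in times]
```

On noisy real data the same fit applies, with the ratio of the two fitted constants giving the relative bleaching speed.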
Procedia PDF Downloads 407
158 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms are heuristic, their outputs have only contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “a needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers.
Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnostic, constituted by main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
157 Admissibility as a Property of Evidence in Modern Conditions
Authors: Iryna Teslenko
Abstract:
According to the provisions of the current criminal procedural legislation of Ukraine, the issue of admissibility of evidence is closely related to both the right to a fair trial and the presumption of innocence. The general rule is that evidence obtained improperly or illegally cannot be taken into account in a court case. Therefore, the prosecution's evidence base, collected at the pre-trial investigation stage in compliance with the requirements of the law, is of crucial importance for the criminal process; violation of those requirements entails the recognition of the relevant evidence as inadmissible, which can nullify all the efforts of the pre-trial investigation body and the prosecution. The issue of admissibility of evidence in criminal proceedings is therefore fundamentally important and decisive for the entire process. Research on this issue began in December 2021. At that time, there was still no clear understanding of what needed to be conveyed to the scientific community. In February 2022, the lives of all citizens of Ukraine changed totally. A war broke out in the country. At a time when the entire world community is on the path of humanizing society, respecting the rights and freedoms of man and citizen, a military conflict has arisen in the middle of Europe - one country attacked another, and war crimes are being committed. The world still cannot believe it, but it is happening here and now: people are dying, infrastructure is being destroyed, and war crimes are being committed, contrary to the signed and ratified international conventions, and contrary to all the acquisitions and development of world law.
At this time, the life of the world has divided into before and after February 24, 2022. The world cannot be the same as it was before, and neither can the approach to solving legal issues in the criminal process, in particular issues of proving the commission of crimes and the involvement of certain persons in their commission. An international criminal has appeared in the humane European world, one who disregards all norms of law and morality and does not adhere to any principles. Until now, the practice of the European Court of Human Rights and the domestic courts of Ukraine treated such a property of evidence in criminal proceedings as admissibility with a certain formalism. Currently, we have information that the Office of the Prosecutor of the International Criminal Court in The Hague has started an investigation into war crimes in Ukraine and is documenting them. In our opinion, the world cannot allow formalism in bringing a war criminal to justice. There is a war going on in Ukraine, and the cities are under round-the-clock missile fire from the aggressor country, which makes it impossible to carry out certain investigative actions. If, due to formal deficiencies, the collected evidence is declared inadmissible, the guilty may go unpunished. This, in turn, sends a message to other terrorists in the world about the impunity of their actions; the system of deterring criminals from committing criminal offenses (crimes) will collapse as the understanding of the inevitability of punishment is lost, and this will affect world security as a whole and European security in particular. Therefore, we believe that the world cannot allow chaos in the issue of general security; there should be a transformation of the general approach to such a property of evidence in the criminal process as admissibility, in order to ensure the inevitability of the punishment of criminals.
We believe that the scientific and legal community should not allow criminals to avoid responsibility. The evil that is destroying Ukraine should be punished. We must all together prove that legal norms are not just words written on paper but rules of behavior for all members of society, whose non-observance leads to mandatory responsibility. Everyone who commits a crime will be punished; this inevitability is the guarantor of world security in the future.
Keywords: admissibility of evidence, criminal process, war, Ukraine
Procedia PDF Downloads 87
156 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence
Authors: Katherine Maurer
Abstract:
Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk for many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of those children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (either child abuse, partner abuse, or both). Prior research has primarily been conducted using dichotomous measures of exposure (yes; no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data of frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure predicts greater risk of ITFV. Methods: The study utilized 3 panels of prospective data from a cohort of 15-year-olds (N=338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing parental IPV, and young adult IPV perpetration and victimization.
Results: Consistent with previous findings, past-12-month incidence rates for severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. The vast majority of reported acts of violence, whether minor or severe, fell in the 1-3 range in the past 12 months. Reported frequencies of more than five times in the past year were rare, with less than 10% reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than acts of severe violence. Overall, regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant due to a lack of power in the final sample (N=338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that in the context of both child physical abuse and physical IPV there are at least two classes when frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested.
Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling
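A hurdle count model, as suggested above, splits the distribution into a Bernoulli part (any violence vs. none) and a truncated count part for the positive frequencies. A minimal sketch, assuming a zero-truncated Poisson for the positive counts and toy data; this illustrates the model family, not the study's SEM implementation:

```python
import math

def fit_hurdle_poisson(counts, tol=1e-10):
    """Two-part hurdle model: MLE of the probability of clearing the
    'hurdle' (any reported violence) and of the zero-truncated Poisson
    rate lam for the positive counts. Returns (p_positive, lam)."""
    positives = [c for c in counts if c > 0]
    p_positive = len(positives) / len(counts)
    m = sum(positives) / len(positives)  # mean of the positive counts
    # The MLE of lam solves m = lam / (1 - exp(-lam)); fixed-point iteration
    lam = m
    while True:
        nxt = m * (1.0 - math.exp(-lam))
        if abs(nxt - lam) < tol:
            return p_positive, nxt
        lam = nxt

# Toy data mimicking the skew reported above: most respondents report no
# acts, and positive counts cluster in the 1-3 range
counts = [0] * 70 + [1] * 12 + [2] * 8 + [3] * 6 + [5] * 3 + [8] * 1
```

Fitting the two parts separately is what lets the model respect the excess zeros while still using the frequency information in the positive counts.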
Procedia PDF Downloads 242
155 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study
Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi
Abstract:
Background: Monitoring pilots' physiological activity related to mental workload (MW) will be useful to improve aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It would be particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognition and emotion overloads are not necessarily cumulative. Methodology: Eight healthy volunteers in possession of the Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes' duration in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of both a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli to which responses had to meet specific criteria. Regarding emotional condition, the two dual-tasks were designed to ensure analogous difficulty in terms of solicited cognitive demands. The former was performed by the pilot alone, i.e. the low arousal (LA) condition.
In contrast, the latter generated high arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RT was found for the LC compared to the HC (p < .001) condition. No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evident (p=.028). Post hoc analysis showed smaller HR for the HA compared to the LA condition only for LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands, and it depends on the emotional state of pilots. According to the behavioral data, the experimental setup satisfactorily generated different emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots’ mental workload.
Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload
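The HR computation described above reduces to averaging beats over fixed 4-minute segments. A hedged sketch of that windowing step, assuming the input is a stream of R-R intervals in seconds (the function name and data are illustrative, not the study's actual pipeline):

```python
def mean_hr_per_window(rr_intervals_s, window_s=240.0):
    """Mean heart rate (beats per minute) over consecutive windows of
    window_s seconds (default 4 min), from R-R intervals in seconds."""
    rates = []
    beats, elapsed = 0, 0.0
    for rr in rr_intervals_s:
        beats += 1
        elapsed += rr
        if elapsed >= window_s:          # close the current window
            rates.append(60.0 * beats / elapsed)
            beats, elapsed = 0, 0.0
    return rates
```

For example, a steady 0.5 s R-R interval (120 bpm) over 8 minutes yields two windows of 120 bpm each.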
Procedia PDF Downloads 275
154 Education Management and Planning with Manual Based
Authors: Purna Bahadur Lamichhane
Abstract:
Education planning and management are foundational pillars for developing effective educational systems. However, in many educational contexts, especially in developing nations, technology-enabled management is still emerging. In such settings, manual-based systems, where instructions and guidelines are physically documented, remain central to educational planning and management. This paper examines the effectiveness, challenges, and potential of manual-based education planning systems in fostering structured, reliable, and adaptable management frameworks. The objective of this study is to explore how a manual-based approach can successfully guide administrators, educators, and policymakers in delivering high-quality education. By using structured, accessible instructions, this approach serves as a blueprint for educational governance, offering clear, actionable steps to achieve institutional goals. Through an analysis of case studies from various regions, the paper identifies key strategies for planning school schedules, managing resources, and monitoring academic and administrative performance without relying on automated systems. The findings underscore the significance of organized documentation, standard operating procedures, and comprehensive manuals that establish uniformity and maintain educational standards across institutions. With a manual-based approach, management can remain flexible, responsive, and user-friendly, especially in environments where internet access and digital literacy are limited. Moreover, it allows for localization, where instructions can be tailored to the unique cultural and socio-economic contexts of the community, thereby increasing relevancy and ownership among local stakeholders. This paper also highlights several challenges associated with manual-based education management. 
Manual systems often require significant time and human resources for maintenance and updating, potentially leading to inefficiencies and inconsistencies over time. Furthermore, manual records can be susceptible to loss, damage, and limited accessibility, which may affect decision-making and institutional memory. There is also the risk of siloed information, where crucial data resides with specific individuals rather than being accessible across the organization. However, with proper training and regular oversight, many of these limitations can be mitigated. The study further explores the potential for hybrid approaches, combining manual planning with selected digital tools for record-keeping, reporting, and analytics. This transitional strategy can enable schools and educational institutions to gradually embrace digital solutions without discarding the familiarity and reliability of manual instructions. In conclusion, this paper advocates for a balanced, context-sensitive approach to education planning and management. While digital systems hold the potential to streamline processes, manual-based systems offer resilience, inclusivity, and adaptability for institutions where technology adoption may be constrained. Ultimately, by reinforcing the importance of structured, detailed manuals and instructional guides, educational institutions can build robust management frameworks that facilitate both short-term successes and long-term growth in their educational mission. This research aims to provide a reference for policymakers, educators, and administrators seeking practical, low-cost, and adaptable solutions for sustainable educational planning and management.
Keywords: education, planning, management, manual
Procedia PDF Downloads 12
153 An Initial Assessment of the Potential Contribution of 'Community Empowerment' to Mitigating the Drivers of Deforestation and Forest Degradation in Giam Siak Kecil-Bukit Batu Biosphere Reserve
Authors: Arzyana Sunkar, Yanto Santosa, Siti Badriyah Rushayati
Abstract:
Indonesia has experienced annual forest fires that have rapidly destroyed and degraded its forests. Fires in the peat swamp forests of Riau Province have set the stage for problems to worsen, as this is the ecosystem most prone to fires (and the most difficult to extinguish). Despite various efforts to curb deforestation and forest degradation processes, severe forest fires are still occurring. To find an effective solution, the basic causes of the problems must be identified. It is therefore critical to have an in-depth understanding of the underlying causal factors that have contributed to deforestation and forest degradation as a whole, in order to attain reductions in their rates. An assessment of the drivers of deforestation and forest degradation was carried out, in order to design and implement measures that could slow these destructive processes. Research was conducted in Giam Siak Kecil–Bukit Batu Biosphere Reserve (GSKBB BR), in the Riau Province of Sumatera, Indonesia. A biosphere reserve was selected as the study site because such reserves aim to reconcile conservation with sustainable development. A biosphere reserve should promote a range of local human activities, together with development values that are in line spatially and economically with the area's conservation values, through use of a zoning system. Moreover, GSKBB BR is an area with vast peatlands and experiences forest fires annually. Various factors were analysed to assess the drivers of deforestation and forest degradation in GSKBB BR; data were collected from focus group discussions with stakeholders, key informant interviews with key stakeholders, field observation and a literature review. Landsat satellite imagery was used to map forest-cover changes for various periods.
Analysis of Landsat images, taken during the period 2010-2014, revealed that within the non-protected area of the core zone there was a trend towards decreasing peat swamp forest areas, increasing land clearance, and increasing areas of community oil-palm and rubber plantations. Fire was used for land clearing, and most of the forest fires occurred in the most populous area (the transition area). The study found a relationship between the deforested/degraded areas and certain distance variables, i.e. distance from roads, villages and the borders between the core area and the buffer zone. The further the distance from the core area of the reserve, the higher the degree of deforestation and forest degradation. Research findings suggested that agricultural expansion may be the direct cause of deforestation and forest degradation in the reserve, whereas socio-economic factors were the underlying drivers of forest cover changes; such factors consisted of a combination of socio-cultural, infrastructural, technological, institutional (policy and governance), demographic (population pressure) and economic (market demand) considerations. These findings indicated that local factors/problems were the critical causes of deforestation and degradation in GSKBB BR. This research therefore concluded that reductions in deforestation and forest degradation in GSKBB BR could be achieved through ‘local actor’-tailored approaches such as community empowerment.
Keywords: actor-led solution, community empowerment, drivers of deforestation and forest degradation, Giam Siak Kecil-Bukit Batu Biosphere Reserve
Procedia PDF Downloads 348
152 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Thus, wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments.
However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows the characteristic wetland types of each place to be quantified and characterized. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features. Indeed, the use of very high spatial resolution images (<3 m) is necessary to map small and large areas alike. Moreover, with the recent evolution of artificial intelligence (AI), deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is also based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the k-nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
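The object-oriented k-NN stage reduces to a majority vote among the nearest labeled training objects in feature space, scored against ground truth with Cohen's kappa, the index quoted above. A minimal self-contained illustration on toy spectral/textural features (the data, labels and function names are invented for the example, not the study's):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Label feature vector x by majority vote of its k nearest training
    samples (squared Euclidean distance on the feature bands)."""
    nearest = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def cohen_kappa(truth, pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(truth)
    po = sum(t == p for t, p in zip(truth, pred)) / n
    labels = set(truth) | set(pred)
    pe = sum((truth.count(l) / n) * (pred.count(l) / n) for l in labels)
    return (po - pe) / (1.0 - pe)

# Toy two-band features, e.g. a water index and a texture measure
train_X = [[0.80, 0.10], [0.90, 0.20], [0.70, 0.15],
           [0.10, 0.80], [0.20, 0.90], [0.15, 0.70]]
train_y = ["wetland", "wetland", "wetland", "dry", "dry", "dry"]
```

A kappa above 0.85, as reported, means agreement well beyond what chance class proportions alone would produce.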
Procedia PDF Downloads 72
151 Trajectories of PTSD from 2-3 Years to 5-6 Years among Asian Americans after the World Trade Center Attack
Authors: Winnie Kung, Xinhua Liu, Debbie Huang, Patricia Kim, Keon Kim, Xiaoran Wang, Lawrence Yang
Abstract:
A considerable number of Asian Americans were exposed to the World Trade Center attack due to the proximity of the site to Chinatown and the sizeable number of South Asians working in the collapsed and damaged buildings nearby. Few studies focused on Asians in examining the disaster’s mental health impact, and even fewer longitudinal studies were reported beyond the first couple of years after the event. Based on the World Trade Center Health Registry, this study examined the trajectory of PTSD of individuals directly exposed to the attack from 2-3 to 5-6 years after the attack, comparing Asians against the non-Hispanic White group. Participants included 2,431 Asians and 31,455 Whites. Trajectories were delineated into the resilient, chronic, delayed-onset and remitted groups using a PTSD Checklist cut-off score of 44 at the two waves. Logistic regression analyses were conducted to compare the poorer trajectories against the resilient as a reference group, using baseline sociodemographic characteristics, exposure to the disaster, lower respiratory symptoms and previous depression/anxiety disorder diagnosis as predictors, and recruitment source as the control variable. Asians had significantly lower socioeconomic status in terms of income, education and employment status compared to Whites. Over 3/4 of participants from both races were resilient, though slightly fewer Asians than Whites (76.5% vs 79.8%). Asians had a higher proportion with chronic PTSD (8.6% vs 7.4%) and remission (5.9% vs 3.4%) than Whites. A considerable proportion of participants had delayed-onset in both races (9.1% Asians vs 9.4% Whites). The distribution of trajectories differed significantly by race (p<0.0001), with Asians faring worse.
For Asians, in the chronic vs resilient comparison, significant protective factors included age >65, annual household income >$50,000, and never married vs married/cohabiting; risk factors were direct disaster exposure, job loss due to 9/11, loss of someone, tangible loss, lower respiratory symptoms, and previous mental disorder diagnoses. Similar protective and risk factors were noted for the delayed-onset group, except that education was protective and being an immigrant was a risk. Between the two comparisons, the chronic group was more vulnerable than the delayed-onset, as expected. It should also be noted that in both comparisons, Asians’ current employment status had no significant impact on their PTSD trajectory. Comparing Asians against Whites, the directions of the relationships between the predictors and the PTSD trajectories were mostly the same, although more factors were significant for Whites than for Asians. A few factors showed significant racial differences: lower respiratory symptoms carried higher risk for Whites than Asians, pre-9/11 mental disorder diagnosis carried higher risk for Asians than Whites, and immigrant status was a risk factor for the remitted vs resilient comparison for Whites but not for Asians. That over 17% of Asians still suffered from PTSD 5-6 years after the WTC attack signifies its persistent impact, which incurred substantial human, social and economic costs. The more disadvantaged socioeconomic status of Asians rendered them more vulnerable in their mental health trajectories relative to Whites. Together with their well-documented low tendency to seek mental health help, outreach effort to this population is needed to ensure follow-up treatment and prevention.
Keywords: PTSD, Asian Americans, World Trade Center attack, racial differences
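The four trajectories above are defined by whether the PCL score crosses the cut-off of 44 at each of the two waves. A small sketch of that classification rule (assuming scores at or above 44 count as probable PTSD; the exact handling of the boundary is our assumption, not stated in the abstract):

```python
PCL_CUTOFF = 44  # PTSD Checklist cut-off used above

def trajectory(score_wave1, score_wave2, cutoff=PCL_CUTOFF):
    """Assign a PTSD trajectory from PCL scores at the two waves
    (2-3 years and 5-6 years after the attack)."""
    above1 = score_wave1 >= cutoff
    above2 = score_wave2 >= cutoff
    if above1 and above2:
        return "chronic"
    if above1:
        return "remitted"       # above cut-off early, below later
    if above2:
        return "delayed-onset"  # below early, above later
    return "resilient"
```

Tabulating this rule over both groups is what yields the trajectory proportions (e.g. 76.5% vs 79.8% resilient) compared in the abstract.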
Procedia PDF Downloads 264
150 User-Controlled Color-Changing Textiles: From Prototype to Mass Production
Authors: Joshua Kaufman, Felix Tan, Morgan Monroe, Ayman Abouraddy
Abstract:
Textiles and clothing have been a staple of human existence for millennia, yet the basic structure and functionality of textile fibers and yarns have remained unchanged. While color and appearance are essential characteristics of a textile, an advancement in the fabrication of yarns that allows user-controlled dynamic changes to the color or appearance of a garment has been lacking. Touch-activated and photosensitive pigments have been used in textiles, but these technologies are passive and cannot be controlled by the user. The technology described here allows the owner to control both when and in what pattern the fabric color change takes place. In addition, the manufacturing process is compatible with mass production of the user-controlled, color-changing yarns. The yarn fabrication utilizes a fiber spinning system that can produce either monofilament or multifilament yarns. For products requiring a more robust fabric (backpacks, purses, upholstery, etc.), larger-diameter monofilament yarns with a coarser weave are suitable. Such yarns are produced using a thread-coater attachment to encapsulate a 38-40 AWG metal wire inside a polymer sheath impregnated with thermochromic pigment. Conversely, products such as shirts and pants, requiring yarns that are more flexible and soft against the skin, comprise multifilament yarns of much smaller-diameter individual fibers. Embedding a metal wire in a multifilament fiber spinning process had not been realized to date. This research required collaboration with Hills, Inc., to design a liquid metal-injection system combined with fiber spinning. The new system injects molten tin into each of 19 filaments being spun simultaneously into a single yarn. The resulting yarn contains 19 filaments, each with a tin core surrounded by a polymer sheath impregnated with thermochromic pigment. The color change we demonstrate is distinct from garments containing LEDs that emit light in various colors.
The pigment itself changes its optical absorption spectrum to appear a different color. The thermochromic color change is induced by a temperature change in the inner metal wire within each filament when current is applied from a small battery pack. The temperature necessary to induce the color change is near body temperature and not noticeable by touch. The prototypes already developed either use a simple push button to activate the battery pack or are wirelessly activated via a smartphone app over Wi-Fi. The app allows the user to choose from different activation patterns of stripes that appear continuously in the fabric. The power requirements are mitigated by a large hysteresis between the activation temperature of the pigment and the temperature at which there is full color return. This was made possible by a collaboration with Chameleon International to develop a new, customized pigment. This technology enables a never-before-seen capability: user-controlled, dynamic color and pattern change in large-area woven and sewn textiles and fabrics, with wide-ranging applications from clothing and accessories to furniture and fixed-installation housing and business décor. The ability to activate through Wi-Fi opens up possibilities for the textiles to be part of the ‘Internet of Things.’ Furthermore, this technology is scalable to mass-production levels for wide-scale market adoption.
Keywords: activation, appearance, color, manufacturing
Procedia PDF Downloads 278
149 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily and are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency highly depends on a perfect dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of Diclofenac aqueous solutions. The methodology comprises three steps: first, an experimental phase where a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of Diclofenac aqueous solutions. The performance is evaluated through an ozone utilization index, which relates the amount of ozone supplied to the system per milligram of degraded pollutant.
Next, in a theoretical phase, the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that mathematically represents the reaction mechanisms with adjustable parameters that can be fitted to the experimental results, giving the model proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved Diclofenac mineralization in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals and products of the reaction; optimal values of the reaction rate constants, with simulated Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal percentage of 69.9%, requiring an optimal FeO catalyst loading of only 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (Ha < 3) by producing microbubbles and maintaining a good catalyst distribution.
Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTS intensification
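The two performance measures used above can be sketched numerically. This is a hedged illustration, not the authors' model: the ozone utilization index is taken as mg of ozone supplied per mg of pollutant degraded, and the Hatta number is written for an assumed pseudo-first-order reaction in the liquid film, Ha = sqrt(k1·D_O3)/kL. All numerical values are invented.

```python
import math

# Illustrative sketch (assumed formulas and made-up values, not the study's
# data): ozone utilization index and Hatta number. Ha < 3 suggests the
# system is not limited by mass transfer in the liquid film.

def ozone_utilization_index(ozone_supplied_mg: float,
                            pollutant_degraded_mg: float) -> float:
    """mg of ozone supplied per mg of pollutant degraded (lower is better)."""
    return ozone_supplied_mg / pollutant_degraded_mg

def hatta_number(k1: float, diff_o3: float, kl: float) -> float:
    """Hatta number for a pseudo-first-order reaction: sqrt(k1 * D) / kL."""
    return math.sqrt(k1 * diff_o3) / kl

print(ozone_utilization_index(120.0, 30.0))       # 4.0 mg O3 per mg pollutant
print(round(hatta_number(5.0, 1.7e-9, 1e-4), 2))  # 0.92, i.e. Ha < 3
```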
Procedia PDF Downloads 112
148 Composite Electrospun Aligned PLGA/Curcumin/Heparin Nanofibrous Membranes for Wound Dressing Application
Authors: Jyh-Ping Chen, Yu-Tin Lai
Abstract:
Wound healing is a complicated process involving overlapping hemostasis, inflammation, proliferation, and maturation phases. Ideal wound dressings can replace native skin functions in full-thickness skin wounds through a faster healing rate and by reducing scar formation. Poly(lactic-co-glycolic acid) (PLGA) is a U.S. FDA-approved biodegradable polymer suitable as an ideal wound dressing material. Several in vitro and in vivo studies have demonstrated the effectiveness of curcumin in decreasing the release of inflammatory cytokines, inhibiting enzymes associated with inflammation, and scavenging free radicals that are the major cause of inflammation during wound healing. Heparin has binding affinities for various growth factors. Given the unique and beneficial features offered by those molecules toward the complex process of wound healing, we postulate that a composite wound dressing constructed from PLGA, curcumin and heparin would be a good candidate to accelerate scarless wound healing. In this work, we use electrospinning to prepare curcumin-loaded aligned PLGA nanofibrous membranes (PC NFMs). PC NFMs were further subjected to oxygen plasma modification and surface-grafted with heparin through carbodiimide-mediated covalent bond formation to prepare curcumin-loaded PLGA-g-heparin (PCH) NFMs. The nanofibrous membranes could act as three-dimensional scaffolds to attract fibroblast migration, reduce inflammation, and increase the concentrations of wound-healing-related growth factors at wound sites. From scanning electron microscopy analysis, the nanofibers in each NFM have diameters ranging from 456 to 479 nm and alignment angles within 0.5°. The NFMs show high tensile strength and good water absorptivity and provide suitable pore sizes for nutrient/waste transport. Exposure of human dermal fibroblasts to the extraction medium of PC or PCH NFM showed significantly greater protective effects against hydrogen peroxide than PLGA NFM.
In vitro wound healing assays also showed that the extraction medium of PCH NFM induced significantly better fibroblast migration than PC NFM, which in turn was better than PLGA NFM. The in vivo healing efficiency of the NFMs was further evaluated in a full-thickness excisional wound healing diabetic rat model. After 14 days, PCH NFMs exhibited an 86% wound closure rate, significantly different from the other groups (79% for PC and 73% for PLGA NFM). Real-time PCR analysis indicated that PC and PCH NFMs down-regulated the anti-oxidative enzymes glutathione peroxidase (GPx) and superoxide dismutase (SOD), which are involved in cellular responses to oxidative stress. From histology, the wound area treated with PCH NFMs showed more vascular lumen formation, as seen from immunohistochemistry of α-smooth muscle actin. The wound site also had more collagen type III (65.8%) expression and less collagen type I (3.5%) expression, indicating scarless wound healing. From Western blot analysis, PCH NFM showed good affinity toward growth factors, with increased concentrations of transforming growth factor-β (TGF-β) and fibroblast growth factor-2 (FGF-2) at the wound site to accelerate wound healing. From these results, we suggest PCH NFM as a promising candidate for wound dressing applications.
Keywords: curcumin, heparin, nanofibrous membrane, poly(lactic-co-glycolic acid) (PLGA), wound dressing
Procedia PDF Downloads 155
147 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing
Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay
Abstract:
Organic polymers, constructed from light elements like carbon, hydrogen, nitrogen, oxygen, sulphur and boron, are an emergent class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials due to their remarkable stability arising from strong covalent bonds. They can conventionally form hydrogen bonds, favour π–π contacts, and were recently revealed to be involved in interesting anion–π interactions. The present work mainly focuses upon the development of a single-crystalline, highly cross-linked, triazine-based, nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer has been synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer has been evaluated by employing the Rietveld whole-profile fitting method. The polymer has been found to be composed of monoclinic melamine with space group P21/a. A detailed insight into the chemical structure of the as-synthesized polymer has been elucidated by Fourier Transform Infrared (FTIR) and Raman spectroscopic analyses. X-ray Photoelectron Spectroscopy (XPS) analysis has also been carried out for further understanding of the different types of linkages that create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer has been revealed by images obtained from Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM). Interestingly, this polymer has been found to selectively detect mercury (Hg²⁺) ions at extremely low concentration through fluorescence quenching, with a detection limit as low as 0.03 ppb.
The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity towards the sulphur atoms of biological building blocks. Even a trace quantity of this metal is dangerous for human health. Furthermore, owing to its small ionic radius and high solvation energy, Hg²⁺ remains encapsulated by water molecules, making its detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers remain scarce. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity has contemporary significance. A plausible sensing mechanism has been proposed to understand the applicability of the material as a potential sensor. The impressive sensitivity of the polymer sample towards Hg²⁺ is the very first report in the field of highly crystalline triazine-based polymers (without the introduction of any sulphur groups or functionalization) for mercury ion detection through the photoluminescence quenching technique. This crystalline metal-free organic polymer, being cheap, non-toxic and scalable, has current relevance and could be a promising candidate for Hg²⁺ ion sensing at the commercial level.
Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer
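A quenching-based sensor of this kind is typically quantified with a Stern–Volmer fit and a 3σ detection limit. The sketch below is a hedged illustration of that standard analysis only; the concentrations, intensities, and the blank noise value are invented, not data from this work.

```python
import numpy as np

# Hedged sketch of a standard quenching analysis: Stern-Volmer fit
# (F0/F = 1 + Ksv*[Hg2+]) and a 3-sigma limit of detection. All
# numbers below are made up for illustration.

conc_ppb = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # Hg2+ concentration
intensity = np.array([1000., 910., 835., 715., 555.])  # fluorescence F

f0_over_f = intensity[0] / intensity
ksv, intercept = np.polyfit(conc_ppb, f0_over_f, 1)    # Stern-Volmer slope

# LOD = 3 * sigma_blank / |slope of the F-vs-concentration calibration|
cal_slope = abs(np.polyfit(conc_ppb, intensity, 1)[0])
sigma_blank = 1.2                                      # assumed blank std dev
lod_ppb = 3 * sigma_blank / cal_slope

print(round(ksv, 3), round(lod_ppb, 3))
```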
Procedia PDF Downloads 136
146 Social Marketing – An Integrated and Comprehensive Nutrition Communication Strategy to Improve the Iron Nutriture among Preschool Children
Authors: Manjula Kola, K. Chandralekha
Abstract:
Anaemia is one of the world’s most widespread health problems. The prevalence of anemia in South Asia is among the highest in the world. Iron deficiency anemia accounts for almost 85 percent of all types of anemia in India and affects more than half of the total population. Women of childbearing age, particularly pregnant women, infants, preschool children and adolescents are at greatest risk of developing iron deficiency anemia. In India, 74 percent of children between 6-35 months of age are anemic. Children between 1-6 years in major cities show a high prevalence rate of 64.8 percent. Iron deficiency anemia is not only a public health problem but also a development problem. Its prevention and reduction must be viewed as an investment in human capital that will enhance development and reduce poverty. Ending this hidden hunger in the form of iron deficiency is the most important achievable international health goal. Eliminating the underlying problem is essential to the sustained elimination of iron deficiency anemia. Intervention programmes toward sustained elimination need to be broadly based so that interventions become accepted community practices. Hence, intervention strategies need to go well beyond traditional health and nutrition systems and be based upon empowering people and communities so that they will be capable of arranging for, and sustaining, an adequate intake of iron-rich foods independent of external support. Such strategies must necessarily be multisectoral and integrate interventions with social communication, evaluation and surveillance. The main objective of the study was to design a community-based nutrition intervention using the theoretical framework of social marketing to sustain improvement of iron nutriture among preschool children. To carry out the study, eight rural communities in Chittoor district of Andhra Pradesh, India were selected.
Formative research was carried out for situational analysis, and baseline data were generated with regard to the demographic and socioeconomic status, dietary intakes, and knowledge, attitude and practices of the mothers of preschool children, and the clinical and hemoglobin status of the target group. Based on the formative research results, the research area was divided into four groups: experimental areas I, II and III and a control area. A community-based, integrated and comprehensive social marketing intervention was designed based on various theories and models of nutrition education/communication. In experimental area I, a nutrition intervention using social marketing plus weekly iron-folic acid supplementation was given to improve the iron nutriture of preschool children. In experimental area II, social marketing alone was implemented, and in experimental area III, iron supplementation alone was given. No intervention was given in the control area. The impact evaluation revealed that, among the different interventions tested, the integrated social marketing intervention yielded the best outcomes. The overall observations of the study indicate that social marketing is an integrated and functional strategy for nutrition communication to prevent and control iron deficiency. Various theoretical frameworks/models for nutrition communication facilitate the design of culturally appropriate interventions; these achieved improvements in knowledge, attitude and practices, thereby resulting in a successful impact on the nutritional status of the target groups.
Keywords: anemia, iron deficiency, social marketing, theoretical framework
Procedia PDF Downloads 405
145 Developing a Methodology to Examine Psychophysiological Responses during Stress Exposure and Relaxation: An Experimental Paradigm
Authors: M. Velana, G. Rinkenauer
Abstract:
Nowadays, nurses are facing unprecedented amounts of pressure due to ongoing global health demands. Work-related stress can cause a high physical and psychological workload, which can lead, in turn, to burnout. On the physiological level, stress triggers an initial activation of the sympathetic nervous and adrenomedullary systems, resulting in increases in cardiac activity. Furthermore, activation of the hypothalamus-pituitary-adrenal axis provokes endocrine and immune changes leading to the release of cortisol and cytokines in an effort to re-establish body balance. Based on the current state of the literature, resilience and mindfulness exercises among nurses can effectively decrease stress and improve mood. However, it is still unknown which relaxation techniques would be suitable and to what extent they would be effective in decreasing psychophysiological arousal deriving from either a physiological or a psychological stressor. Moreover, although cardiac activity and cortisol are promising candidates for examining the effectiveness of relaxation in reducing stress, it remains to shed light on the role of cytokines in this process so as to thoroughly understand the body’s response to stress and to relaxation. Therefore, the main aim of the present study is to develop a comprehensive experimental paradigm and assess different relaxation techniques, namely progressive muscle relaxation and a mindfulness exercise originating from cognitive therapy by means of biofeedback, under highly controlled laboratory conditions. An experimental between-subject design will be employed, in which 120 participants will be randomized to either a physiological or a psychological stress-related experiment. In particular, the cold pressor test refers to a procedure in which the participants have to immerse their non-dominant hands into ice water (2-3 °C) for 3 min. The participants are requested to keep their hands in the water throughout the whole duration.
However, they can immediately terminate the test should it become barely tolerable. A pre-test anticipation phase and a post-stress period, each of 3 min, are planned. The Trier Social Stress Test will be employed to induce psychological stress. During this laboratory stressor, the participants are instructed to give a 5-min speech in front of a committee of communication specialists. Before the main task, there is a 10-min anticipation period. Subsequently, participants are requested to perform an unexpected arithmetic task. After stress exposure, the participants will perform one of the relaxation exercises (treatment condition) or watch a neutral video (control condition). Electrocardiography, salivary samples and self-reports will be collected at different time points. The preliminary results deriving from the pilot study showed that the aforementioned paradigm can effectively induce stress reactions and that relaxation might decrease the impact of stress exposure. It is of utmost importance to assess how the human body responds under different stressors and relaxation exercises so that an evidence-based intervention can be transferred to a clinical setting to improve nurses’ general health. Based on future laboratory findings, the research group plans to conduct a pilot-level randomized study to decrease stress and promote well-being among nurses who work in the stress-riddled environment of a hospital located in Northern Germany.
Keywords: nurses, psychophysiology, relaxation, stress
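Cardiac activity collected via electrocardiography is commonly reduced to time-domain heart-rate-variability indices. The sketch below is an illustrative, hedged example of two such standard indices (SDNN and RMSSD) computed from R-R intervals; it is not the authors' analysis pipeline, and the R-R series is made up.

```python
import statistics

# Illustrative sketch (not the study's pipeline): two common time-domain
# heart-rate-variability indices computed from ECG-derived R-R intervals
# in milliseconds. Reduced HRV is a typical marker of acute stress.

def sdnn(rr_ms):
    """Sample standard deviation of normal-to-normal R-R intervals."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

rr = [812, 800, 790, 840, 830, 805, 795]  # made-up resting R-R series
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```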
Procedia PDF Downloads 110
144 Diamond-Like Carbon-Based Structures as Functional Layers on Shape-Memory Alloy for Orthopedic Applications
Authors: Piotr Jablonski, Krzysztof Mars, Wiktor Niemiec, Agnieszka Kyziol, Marek Hebda, Halina Krawiec, Karol Kyziol
Abstract:
NiTi alloys, possessing unique mechanical properties such as pseudoelasticity and the shape memory effect (SME), are suitable for many applications, including implantology and biomedical devices. Additionally, these alloys have elastic modulus values similar to those of human bones, which is very important in orthopedics. Unfortunately, the environment of physiological fluids in vivo causes unfavorable release of Ni ions, which in turn may lead to metallosis as well as allergic reactions and toxic effects in the body. For these reasons, the surface properties of NiTi alloys should be improved to increase corrosion resistance while taking into account biological properties, i.e. excellent biocompatibility. Prospective in this respect are layers based on DLC (Diamond-Like Carbon) structures, which are an attractive solution for many applications in implantology. These DLC coatings, usually obtained by PVD (Physical Vapour Deposition) and PA CVD (Plasma Activated Chemical Vapour Deposition) methods, can also be modified by doping with other elements such as silicon, nitrogen, oxygen, fluorine, titanium and silver. These methods, in combination with a suitably designed layer structure, make it possible to tailor the physicochemical and biological properties of the modified surfaces. The mentioned techniques provide specific physicochemical surface properties in a single technological process. In this work, the following types of layers based on DLC structures (incl. Si-DLC or Si/N-DLC) are proposed as a prospective and attractive approach to surface functionalization of a shape memory alloy. Nitinol substrates were modified under plasma conditions using RF CVD (Radio Frequency Chemical Vapour Deposition). The influence of plasma treatment on the useful properties of the modified substrates was determined after deposition of DLC layers doped with silicon and/or nitrogen atoms, as well as after pre-treatment only in an O2/NH3 plasma atmosphere in an RF reactor.
The microstructure and topography of the modified surfaces were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM). Furthermore, the atomic structure of the coatings was characterized by IR and Raman spectroscopy. The research also included the evaluation of surface wettability and surface energy, as well as the characterization of selected mechanical and biological properties of the layers. In addition, the corrosion properties of the alloys before and after modification in physiological saline were investigated. To determine the corrosion resistance of NiTi in Ringer solution, potentiodynamic polarization curves (LSV – Linear Sweep Voltammetry) were plotted. Furthermore, the evolution of corrosion potential versus immersion time of the NiTi alloy in Ringer solution was recorded. Based on all the research carried out, the usefulness of the proposed modifications of nitinol for medical applications was assessed. It was shown, inter alia, that the obtained Si-DLC layers on the surface of the NiTi alloy exhibit a characteristic complex microstructure and increased surface development, which is an important aspect in improving the osteointegration of an implant. Furthermore, the modified alloy exhibits biocompatibility, and the transfer of metals (Ni, Ti) to Ringer’s solution is clearly limited.
Keywords: bioactive coatings, corrosion resistance, doped DLC structure, NiTi alloy, RF CVD
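Polarization data of the kind described are often reduced to a corrosion current via the Stern–Geary relation. The sketch below is a hedged, generic illustration of that textbook calculation; the Tafel slopes and polarization resistance are invented values, not measurements from this NiTi study.

```python
# Hedged sketch of the Stern-Geary estimate of corrosion current density:
# i_corr = B / Rp, with B = (ba * bc) / (2.303 * (ba + bc)).
# All numerical values below are illustrative assumptions.

def stern_geary_b(ba_mv: float, bc_mv: float) -> float:
    """Stern-Geary coefficient B (mV) from anodic/cathodic Tafel slopes (mV/decade)."""
    return (ba_mv * bc_mv) / (2.303 * (ba_mv + bc_mv))

def corrosion_current(b_mv: float, rp_ohm_cm2: float) -> float:
    """i_corr in mA/cm2 from B (mV) and polarization resistance (ohm*cm2)."""
    return b_mv / rp_ohm_cm2

b = stern_geary_b(120.0, 120.0)   # ~26 mV for symmetric Tafel slopes
print(round(b, 1), corrosion_current(b, 1e5))
```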
Procedia PDF Downloads 235
143 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physico-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperature (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather station observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been used extensively. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps.
First, a GWR model has been set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer spatial resolution, have been employed. Two time periods are considered according to the satellite revisit period, i.e. 10:30 am and 9:30 pm. Afterward, the results have been downscaled to 30 meters spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated by cross-validation using indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
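The core of GWR is a locally weighted least-squares fit, with station weights decaying with distance from the prediction point. The sketch below is a hedged, minimal illustration of that idea for a single target location, assuming a Gaussian kernel with a fixed bandwidth; the station coordinates, air temperatures and LST values are synthetic, not the Milan data.

```python
import numpy as np

# Minimal sketch of geographically weighted regression (GWR): a Gaussian
# distance-decay kernel weights each station in a local least-squares fit.
# Kernel choice, bandwidth, and all data below are illustrative assumptions.

def gwr_predict(coords, y, x, target_xy, target_x, bandwidth):
    """Local weighted least-squares estimate of y at one target location."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian weights
    X = np.column_stack([np.ones(len(x)), x])    # intercept + LST predictor
    beta = np.linalg.solve(X.T * w @ X, (X.T * w) @ y)
    return beta[0] + beta[1] * target_x

# Synthetic stations where air temperature is exactly linear in LST:
coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [2., 2.]])
lst = np.array([20., 22., 21., 23., 25.])
t_air = 2.0 + 0.9 * lst
pred = gwr_predict(coords, t_air, lst, np.array([0.5, 0.5]), 24.0, 1.0)
print(round(pred, 2))  # 23.6 for this exactly-linear toy data
```

In a real application the local fit is repeated for every grid cell, and the bandwidth is tuned by cross-validation, as done with the R², RMSE and MAE indicators mentioned above.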
Procedia PDF Downloads 194
142 A Bibliometric Analysis of Ukrainian Research Articles on SARS-COV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems
Authors: Sabina Auhunas
Abstract:
Open Science is developing rapidly in Ukraine for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists, as well as to deliver their own scientific data to national and international journals. However, when it comes to the generalization of data on science activities by Ukrainian scientists, these data are often integrated into e-systems that operate on inconsistent and barely related information sources. To resolve these issues, developed countries productively use e-systems designed to store and manage research data, such as Current Research Information Systems, which enable the combination of uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, and the bibliographic relationship of journals) were taken from the Scopus and Web of Science databases. The study also considered information from COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, directly from documents published by authors with territorial affiliation to Ukraine. These databases provide the information necessary for bibliometric analysis along with necessary details such as copyright, which may not be available in other databases (e.g., ScienceDirect). Search criteria and results for each online database were defined according to the WHO classification of the virus and the disease caused by this virus (Table 1). First, we identified 89 research papers, which provided the final data set after consolidation and removal of duplicates; however, only 56 papers were used for the analysis.
The total number of documents retrieved from the WoS database was 21,641 (48 affiliated with Ukraine among them); for the Scopus database it was 32,478 (41 affiliated with Ukraine among them). In the publication activity of Ukrainian scientists, the following areas prevailed: education and educational research (9 documents, 20.58%); social sciences, interdisciplinary (6 documents, 11.76%); and economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of published papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were funded by five entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationship of journals, and reference linking. Further research on the development of the national scientific e-database continues using new analytical methods.
Keywords: content analysis, COVID-19, scientometrics, text mining
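The consolidation step mentioned above (merging Scopus and WoS result sets and removing duplicates) can be sketched as follows. This is a hedged illustration only: keying on DOI when present and a normalized title otherwise is an assumed strategy, and the records are invented.

```python
# Hedged sketch of merging two bibliographic result sets and removing
# duplicates. Keying on DOI, falling back to a normalized title, is an
# assumption; the records below are invented.

def norm_title(t: str) -> str:
    """Lowercase and strip non-alphanumerics so near-identical titles match."""
    return "".join(ch for ch in t.lower() if ch.isalnum())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or norm_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

scopus = [{"doi": "10.1/x", "title": "COVID-19 in Ukraine"},
          {"doi": None, "title": "SARS-CoV-2 Modelling"}]
wos = [{"doi": "10.1/x", "title": "COVID-19 in Ukraine"},
       {"doi": None, "title": "Sars-CoV-2 modelling"}]
merged = deduplicate(scopus + wos)
print(len(merged))  # 2 unique records remain
```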
Procedia PDF Downloads 115
141 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative of World Traffic
Abstract:
The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. As a result, humans flying at aircraft altitudes are exposed to elevated levels of galactic radiation. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates of the contribution induced by certain solar particle events differ by an order of magnitude. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels at the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was on the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes the extensive air shower characteristics and allows assessment of the ambient dose equivalent. In this approach, the GCR description is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes from ground level up to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient dose equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically large number of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels, with detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous work. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
Keywords: cosmic ray, human dose, solar flare, aviation
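The statement that latitude drives cutoff rigidity variations can be made concrete with the classic Störmer dipole approximation, in which the vertical geomagnetic cutoff rigidity falls off as the fourth power of the cosine of the geomagnetic latitude. This sketch is illustrative only; the paper's actual cutoffs would come from detailed geomagnetic field models.

```python
import math

def vertical_cutoff_gv(geomag_lat_deg):
    """Stormer vertical cutoff rigidity (GV), dipole approximation:
    R_c ~ 14.9 * cos^4(geomagnetic latitude)."""
    return 14.9 * math.cos(math.radians(geomag_lat_deg)) ** 4

# Polar routes admit far more low-rigidity particles than equatorial ones:
for lat in (0, 30, 60, 80):
    print(lat, round(vertical_cutoff_gv(lat), 2))
```

The steep drop toward the poles (from ~14.9 GV at the equator to well under 1 GV above 60°) is why high-latitude routes such as Paris - New York accumulate markedly higher doses during a GLE.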
140 Mitigating Urban Flooding through Spatial Planning Interventions: A Case of Bhopal City
Authors: Rama Umesh Pandey, Jyoti Yadav
Abstract:
Flooding is one of the waterborne disasters that causes extensive destruction in urban areas. Developing countries are at a higher risk of such damage, and more than half of global flooding events take place in Asian countries, including India. Urban flooding is more of a human-induced disaster than a natural one: besides meteorological and hydrological causes, it is highly influenced by anthropogenic factors. Unplanned urbanization and poor management of cities enhance the impact manifold and cause huge losses of life and property in urban areas. It is an irony that urban areas face water scarcity in summer and flooding during the monsoon. This paper is an attempt to highlight the factors responsible for flooding in a city, especially from an urban planning perspective, and to suggest mitigating measures through spatial planning interventions. The analysis was done in two stages: first, assessing the impacts of previous flooding events, and second, analyzing the factors responsible for flooding at the macro and micro levels. Bhopal, a city in Central India with a population of nearly two million, was selected for the study. The city has been experiencing flooding during heavy monsoon rains. The factors responsible for urban flooding were identified through a literature review as well as case studies from cities across the world and in India. The factors thus identified were analyzed for both macro- and micro-level influences. At the macro level, the previous flooding events that caused huge destruction were analyzed and the most affected areas in Bhopal city were identified. Since the identified area falls within the catchment of a drain, the catchment area was delineated for the study. The factors analyzed were: rainfall pattern, to calculate the return period using Weibull's formula; imperviousness, through mapping in ArcGIS; and runoff discharge, using the Rational method.
The catchment was divided into micro-watersheds, and the micro-watershed with the maximum impervious surface was selected to analyze the coverage and effect of physical infrastructure: storm water management, the sewerage system, and solid waste management practices. The area was further analyzed to assess the extent of violation of building byelaws and development control regulations, and encroachment over the natural water streams. The analysis revealed that the main issues are: lack of a sewerage system; inadequate storm water drains; inefficient solid waste management in the study area; violation of building byelaws through extending building structures either onto the drain or onto the road; and encroachment by slum dwellers along or onto the drain, reducing its width and capacity. Other factors include a faulty culvert design resulting in a backwater effect, and roads built higher than the plinths of houses, which causes submersion of their ground floors. The study recommends spatial planning interventions for mitigating urban flooding and strategies for managing excess rainwater during the monsoon season. Recommendations are also made for efficient land use management to mitigate waterlogging in areas vulnerable to flooding.
Keywords: mitigating strategies, spatial planning interventions, urban flooding, violation of development control regulations
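The two macro-level calculations named above (Weibull's plotting-position formula for return period and the Rational method for peak runoff) are standard and can be sketched briefly. The input numbers below are illustrative, not the Bhopal study's data.

```python
def weibull_return_period(annual_maxima):
    """Return period T = (n + 1) / m for each event, where m is the rank
    of the event when the n annual maxima are sorted in descending order."""
    ranked = sorted(annual_maxima, reverse=True)
    n = len(ranked)
    return [(value, (n + 1) / rank) for rank, value in enumerate(ranked, start=1)]

def rational_runoff_m3s(c, intensity_mm_hr, area_km2):
    """Rational method peak discharge Q = 0.278 * C * i * A
    (runoff coefficient C, intensity i in mm/hr, area A in km^2, Q in m^3/s)."""
    return 0.278 * c * intensity_mm_hr * area_km2

rainfall = [112, 95, 140, 88, 103]          # annual maximum rainfall, mm
print(weibull_return_period(rainfall)[0])   # largest event: (140, 6.0)
print(round(rational_runoff_m3s(0.8, 60, 12.5), 1))  # 166.8 m^3/s
```

A high runoff coefficient C (here 0.8) reflects the heavily impervious micro-watershed: as imperviousness rises, peak discharge rises proportionally, which is exactly the mechanism linking unplanned urbanization to urban flooding.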
139 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to deliver nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Furthermore, low actuation power is beneficial in aerosol-generating devices since it results in reduced emission of toxic chemicals. For e-cigarettes, heating powers below 10 W can be considered low compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, applications of PDA in e-cigarette aerosol measurement are rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase in heating power from 3.5 W to 6.5 W resulted in enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression combining exponential, Gaussian and polynomial (EGP) distributions was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements showed a typical decay of the centerline streamwise mean velocity of the aerosol jet along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
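The count median diameter (CMD) and geometric standard deviation (GSD) reported above can be computed from binned particle counts as sketched below, assuming an approximately log-normal distribution. The bin diameters and counts are illustrative assumptions chosen near the reported ~0.7 μm range, and the study's EGP fit for the asymmetric departures from log-normality is not reproduced here.

```python
import math

def cmd_gsd(diameters_um, counts):
    """CMD = count-weighted geometric mean of the bin diameters;
    GSD = exp of the count-weighted standard deviation of ln(diameter)."""
    total = sum(counts)
    mean_ln = sum(n * math.log(d) for d, n in zip(diameters_um, counts)) / total
    var_ln = sum(n * (math.log(d) - mean_ln) ** 2
                 for d, n in zip(diameters_um, counts)) / total
    return math.exp(mean_ln), math.exp(math.sqrt(var_ln))

# Illustrative bins (um) and counts:
d = [0.4, 0.55, 0.7, 0.9, 1.2]
n = [120, 450, 800, 420, 110]
cmd, gsd = cmd_gsd(d, n)
print(round(cmd, 2), round(gsd, 2))  # ~0.7 um and ~1.29
```

For a perfectly log-normal aerosol, roughly two-thirds of the particle counts lie between CMD/GSD and CMD×GSD, which is why the narrow GSD range (1.32 to 1.43) indicates a fairly monodisperse aerosol despite the power-dependent asymmetry.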