Search results for: children with attention deficit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7395

195 The Pigeon Circovirus Evolution and Epidemiology under Conditions of One Loft Race Rearing System: The Preliminary Results

Authors: Tomasz Stenzel, Daria Dziewulska, Ewa Łukaszuk, Joy Custer, Simona Kraberger, Arvind Varsani

Abstract:

Viral diseases, especially those impairing the immune system, are among the most important problems in avian pathology, yet little data is available on this subject for bird species other than commercial poultry. Recently, increasing attention has been paid to racing pigeons, which have been selectively bred for many years for their ability to return to their place of origin. Currently, these birds are used for races at distances from 100 to 1000 km, and winning pigeons are highly valuable. The rearing system of racing pigeons contradicts the principles of biosecurity, as birds originating from various breeding facilities are commonly transported and reared together in “One Loft Race” (OLR) facilities. This favors the spread of multiple infections and provides conditions for the development of novel variants of various pathogens through recombination. One of the most significant viruses occurring in this avian species is the pigeon circovirus (PiCV), which is detected in ca. 70% of pigeons. Circoviruses are characterized by vast genetic diversity, which is due, among other things, to recombination: the exchange of fragments of genetic material among various strains of the virus during the infection of one organism. The rate and intensity of the emergence of novel PiCV recombinants have not been determined so far. For this reason, an experiment was performed to investigate how frequently novel PiCV recombinants arise in racing pigeons kept in OLR-type conditions. Fifteen racing pigeons originating from five different breeding facilities, subclinically infected with various PiCV strains, were housed in one room for eight weeks to mimic the conditions of OLR rearing. Blood and swab samples were collected from the birds every seven days to recover complete PiCV genomes, which were amplified through Rolling Circle Amplification (RCA), cloned, sequenced, and subjected to bioinformatic analyses aimed at determining the genetic diversity and the dynamics of recombination among the viruses. In addition, the virus shedding rate/level of viremia, the expression of IFN-γ and interferon-related genes, and anti-PiCV antibodies were determined to enable a complete analysis of the course of infection in the flock. Initial results show that 336 full PiCV genomes were obtained, exhibiting nucleotide similarity ranging from 86.6% to 100%, and that 8 of these were recombinants of viruses originating from different lofts. The first recombinant appeared after seven days of the experiment, but most of the recombinants appeared after 14 and 21 days of joint housing. The level of viremia and virus shedding was highest in the 2nd week of the experiment and gradually decreased toward the end, which partially corresponded with Mx1 gene expression and antibody dynamics. The results show that the OLR pigeon-rearing system could play a significant role in spreading infectious agents such as circoviruses and in contributing to PiCV evolution through recombination. It is therefore worth considering whether a popular gambling game such as pigeon racing is sensible from both an animal welfare and an epidemiological point of view.
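
The pairwise nucleotide similarity reported above (86.6% to 100%) is typically computed over aligned genome pairs. The following is a minimal sketch of such a computation on pre-aligned sequences; the FASTA file name and the simple column-identity metric are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal pairwise-identity sketch for pre-aligned, equal-length genome
# sequences. Assumes a FASTA alignment file; the file name is hypothetical.
from itertools import combinations

def read_fasta(path):
    """Parse a FASTA file into {name: sequence}."""
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {n: "".join(parts) for n, parts in seqs.items()}

def percent_identity(a: str, b: str) -> float:
    """Identity over aligned columns, ignoring positions gapped in both."""
    pairs = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

genomes = read_fasta("picv_genomes_aligned.fasta")  # hypothetical file
for (n1, s1), (n2, s2) in combinations(genomes.items(), 2):
    print(f"{n1} vs {n2}: {percent_identity(s1, s2):.1f}% identity")
```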

Keywords: pigeon circovirus, recombination, evolution, one loft race

Procedia PDF Downloads 72
194 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. OOHCs act as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features present in our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. Here we compare the performance of randomly generated regression trees and support vector machines, and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes in the corpus for identifying high-risk patients. By combining the confidence of our classification program with respect to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
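
As a concrete illustration of one architecture variant compared above (a convolutional layer feeding a recurrent layer for binary classification of clinical notes), here is a minimal sketch in Keras. All hyperparameters, the vocabulary size, and the random data are illustrative assumptions rather than values from the paper; swapping layers.LSTM for layers.GRU gives the GRU variant.

```python
# Minimal sketch of a CNN + LSTM text classifier for frequent-attender
# prediction. Vocabulary size, sequence length, and layer widths are assumed.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000    # assumed vocabulary size
MAX_LEN = 300          # assumed maximum note length in tokens

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),          # token embeddings
    layers.Conv1D(64, 5, activation="relu"),    # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                            # sequence-level context
    layers.Dense(1, activation="sigmoid"),      # P(frequent attender)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy integer-encoded notes and labels, for illustration only.
X = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, batch_size=8)
```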

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 131
193 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System

Authors: A. Chávez, A. Rodríguez, F. Pinzón

Abstract:

Society is concerned about the environmental, economic and social impacts of solid waste disposal. Landfills are confinement sites, technically designed and operated using engineering principles, where the problems of pollution and damage to human health are reduced: the residue is stored in a small area, compacted to reduce its volume, and covered with soil layers, while the liquid (leachate) and gases produced by the decomposition of organic matter are controlled. Despite planning and site selection for disposal and the monitoring and control of the selected processes, the dilemma of leachate remains: its extreme concentration of pollutants devastates soil, flora and fauna, an aggressive process requiring priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O and sludge; removes suspended and non-settleable solids; transforms nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria forming heterogeneous populations. It is also possible to find unicellular fungi, algae, protozoa and rotifers, which process the organic carbon source and oxygen, as well as the nitrogen and phosphorus that are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses and wastewater, is kept aerated by mechanical aeration diffusers, given that the biological processes remove dissolved material (< 45 microns) while generating biomass that is easily separated by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen bound in nitrate, releasing nitrogen (N) in the gas phase and thus avoiding the negative effects of the presence of ammonia or phosphorus. Overall, the activated sludge system operates with a hydraulic retention time of about 8 hours, which does not satisfy the demand for nitrification, which occurs on average at an MLSS value of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average residence time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogotá, Colombia, in which the leachate was subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR), so that the effluent could be discharged into water bodies without causing ecological collapse. The system operated with a residence time of 8 days and a capacity of 30 L, removing more than 90% of BOD and COD from initial values of 1,720 mg/L and 6,500 mg/L, respectively. By promoting deliberate nitrification, commercial use of diffused aeration systems for sludge leachate from landfills is expected to become possible.
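
The >90% removal figures quoted above follow directly from influent and effluent concentrations; a small sanity-check sketch is shown below, where the effluent values are illustrative assumptions consistent with the reported removal rates, not measured data.

```python
# Sanity check of the removal efficiencies reported above, using
# removal (%) = (C_in - C_out) / C_in * 100.
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent removal given influent/effluent concentrations (mg/L)."""
    return (c_in - c_out) / c_in * 100.0

bod_in, cod_in = 1720.0, 6500.0    # reported influent values (mg/L)
bod_out, cod_out = 150.0, 600.0    # assumed effluent values (mg/L)

print(f"BOD removal: {removal_efficiency(bod_in, bod_out):.1f}%")  # ~91.3%
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.1f}%")  # ~90.8%
```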

Keywords: sludge, landfill, leachate, SBR

Procedia PDF Downloads 272
192 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers

Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec

Abstract:

Nonlinear phenomena in the near- and mid-infrared region are attracting scientific attention mainly due to the possibilities of supercontinuum generation and its subsequent utilization in ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can be readily exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. The first part of the paper discusses the thulium laser, which is based on a gain-switched seed laser and a series of amplification stages for obtaining output peak powers on the order of kilowatts with pulses shorter than 200 ps at full-width at half-maximum. The pulsed thulium laser is first studied in simulation software, focusing on seed-laser properties. Afterward, a thulium-based pre-amplification stage is discussed, with a focus on low-noise signal amplification, high signal gain, and the elimination of pulse distortions during propagation in the gain medium. Following the pre-amplification stage, a second gain stage is evaluated, incorporating a shorter thulium fiber with an increased rare-earth dopant ratio. Finally, a power-booster stage is analyzed, in which the kilowatt peak powers should be achieved. The analytical results are then validated by an experimental campaign, and the simulation model is corrected based on real component parameters: actual insertion losses, cross-talk, polarization dependencies, etc. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared to, e.g., 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser. The supercontinuum generation simulation provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity is known; furthermore, the input pulse shape and peak power must be known, which is assured by the experimental measurement of the studied pulsed thulium laser. The supercontinuum simulation model is put in relation to designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a pulsed thulium laser system working with specialty optical fibers.
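
Supercontinuum modelling of this kind typically solves a (generalized) nonlinear Schrödinger equation with the split-step Fourier method. Below is a minimal sketch of that core loop for the basic NLSE, ∂A/∂z = −i(β₂/2)∂²A/∂t² + iγ|A|²A. The fiber parameters, pulse parameters, and step count are illustrative assumptions (a production model would add higher-order dispersion, Raman and self-steepening terms, and adaptive stepping), not the authors' actual code.

```python
# Split-step Fourier solver for the basic NLSE, the core of most
# supercontinuum models. All parameter values are illustrative assumptions.
import numpy as np

N = 2**13                              # time samples
T_win = 2e-9                           # time window (s)
t = np.linspace(-T_win/2, T_win/2, N, endpoint=False)
dt = t[1] - t[0]
w = 2*np.pi*np.fft.fftfreq(N, d=dt)    # angular frequency grid (rad/s)

beta2 = -20e-27                        # assumed GVD near 2000 nm (s^2/m)
gamma = 2e-3                           # assumed nonlinearity (1/(W*m))
P0 = 1e3                               # peak power: "order of kilowatts" (W)
T0 = 200e-12 / 1.763                   # sech width from ~200 ps FWHM (s)
A = np.sqrt(P0) / np.cosh(t / T0)      # input pulse envelope

L, nz = 1.0, 2000                      # fiber length (m), number of steps
dz = L / nz
half_disp = np.exp(0.25j * beta2 * w**2 * dz)  # linear operator for dz/2

for _ in range(nz):                    # symmetric split-step scheme
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # half dispersion step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)   # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # half dispersion step

spectrum = np.fft.fftshift(np.abs(np.fft.fft(A))**2)
print("output peak power (W):", (np.abs(A)**2).max())
```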

Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser

Procedia PDF Downloads 321
191 Addressing Microbial Contamination in East Hararghe, Oromia, Ethiopia: Improving Water Sanitation Infrastructure and Promoting Safe Water Practices for Enhanced Food Safety

Authors: Tuji Jemal Ahmed, Hussen Beker Yusuf

Abstract:

Food safety is a major concern worldwide, with microbial contamination being one of the leading causes of foodborne illnesses. In Ethiopia, untreated groundwater used as drinking water is a primary source of microbial contamination, leading to significant health risks. East Hararghe, Oromia, is one of the regions in Ethiopia affected by this problem. This paper provides an overview of the impact of untreated groundwater on human health in Haramaya Rural District, East Hararghe, and highlights the urgent need for sustained efforts to address the water sanitation supply problem. The use of untreated groundwater for drinking and household purposes in the district is prevalent, leading to high rates of waterborne illnesses such as diarrhea, typhoid fever, and cholera. These illnesses cause considerable morbidity and mortality, especially among vulnerable populations such as children and the elderly, and also have indirect impacts on human health, such as reduced productivity and increased healthcare costs. Groundwater sources are susceptible to microbial contamination due to the infiltration of surface water, human and animal waste, and agricultural runoff. In Haramaya Rural District, poor water management practices, inadequate sanitation facilities, and limited access to clean water sources contribute to the prevalence of untreated groundwater as a primary source of drinking water. These underlying causes of microbial contamination highlight the need for improved water sanitation infrastructure, including better access to safe drinking water sources and the implementation of effective treatment methods. The paper emphasizes the need for regular water quality monitoring, especially for untreated groundwater sources, to ensure safe drinking water for the population. The implementation of effective preventive measures, such as the use of effective disinfectants, proper waste disposal methods, and regular water quality monitoring, is crucial to reducing the risk of contamination and improving public health outcomes in the region. Community education and awareness-raising campaigns can also play a critical role in promoting safe water practices, for example by educating the population on the importance of boiling water before drinking, the use of water filters, and proper sanitation practices. In conclusion, the use of untreated groundwater as a primary source of drinking water in East Hararghe, Oromia, Ethiopia, has significant impacts on human health, leading to widespread waterborne illnesses and posing a significant threat to public health. Sustained efforts are urgently needed to address the root causes of contamination, such as poor sanitation and hygiene practices, improper waste management, and the water sanitation supply problem, through effective preventive measures and community-based education programs. A comprehensive approach that combines community-based water management systems, point-of-use water treatment methods, and awareness-raising campaigns can reduce the incidence of microbial contamination in the region and ultimately improve public health outcomes.

Keywords: food safety, health risks, microbial contamination, untreated groundwater

Procedia PDF Downloads 114
190 Conceptualizing Priorities in the Dynamics of Contemporary Public Administration Reforms

Authors: Larysa Novak-Kalyayeva, Aleksander Kuczabski, Orystlava Sydorchuk, Nataliia Fersman, Tatyana Zemlinskaia

Abstract:

The article presents the results of a creative analysis and comparison of trends in the development of the theory of public administration from the second half of the 20th century to the beginning of the 21st century. It examines how the conceptualization of public administration priorities during reform processes unfolded under the influence of factors such as globalization, integration, information and technological change, and human rights. The priorities of the social state in the concepts of the second half of the 20th century are studied, and the distinctive approaches to determining public administration priorities in the countries of "Soviet dictatorship" in Central and Eastern Europe in the same period are outlined. Particular attention is paid to priorities concerning the interaction between public power and society and to the development of conceptual foundations for the modern managerial process. It is argued that the dynamics of the formation of concepts of European governance are characterized by a sequence of priorities: from socio-economic and moral-ethical to organizational-procedural and non-hierarchical ones. The priorities of the "welfare state" were focused on a decent level of material wellbeing for the population, while the conception of the "minimal state" emphasized individuals' responsibility for their own fate under conditions of minimal state protection. Later, the emphasis shifted to horizontal ties and the redistribution of powers and competences of the "effective state", with its developed procedures and limits of responsibility at all levels of government and in close cooperation with civil society. The priorities of the contemporary period are concentrated on human rights in the concept of "good governance" and all subsequent concepts, which recognize the absolute priority of public administration complying with, providing for, and protecting human rights. The article substantiates the view that civilizational changes taking place under the influence of information and technological imperatives also stipulate changes in priorities, redistribute emphases, and update the principles of managerial concepts on the basis of publicity and transparency, a departure from traditional forms of hierarchy and control in favor of interactivity and inter-sectoral interaction, and the decentralization and humanization of managerial processes. The need to carry out permanent reorganization, establishing interaction between the different participants of public power and social relations and a balance between political forces and social interests on the basis of mutual trust and understanding, determines changes in the social, political, economic and humanitarian paradigms of public administration and their theoretical comprehension. Further studies of the theoretical foundations of modern public administration in an interdisciplinary discourse, in the context of the ambiguous consequences of the globalization and integration processes of modern European state-building, would be advisable, especially during the period of political transformations and economic crises characteristic of contemporary Europe, and particularly of democratic transition countries.

Keywords: concepts of public administration, democratic transition countries, human rights, the priorities of public administration, theory of public administration

Procedia PDF Downloads 174
189 A Long-Standing Methodology Quest Regarding Commentary of the Qur’an: Modern Debates on the Function of Hermeneutics in Qur’an Scholarship in Turkey

Authors: Merve Palanci

Abstract:

This paper aims to reveal and analyze methodology debates on Qur’an commentary in Turkish scholarship and to draw sound inductions about the current situation, with reference to the literature evolving around the credibility of hermeneutics in Qur’an commentary and its methodological connotations, together with other modern approaches to the Qur’an. It is fair to say that Tafseer, one of the main branches of the basic Islamic sciences, has long drawn great attention from both Muslim and non-Muslim scholars. With the emergence of a sharp junction between the natural and social sciences in the post-Enlightenment period, this interest paved the way for methodology discussions conducted in theology circles, which now occupy a noticeable slot in the Tafseer literature as well. A panoramic glance at the classical treatises on the methodology of Tafseer, namely Usul al-Tafseer, leads the reader to the conclusion that these classics are intrinsically aimed at introducing the Qur’an and the early history of its formation as a corpus, and at providing a better understanding of its content. To illustrate, the earliest extant methodology work on Qur’an commentary, al-Aql wa’l Fahm al-Qur’an by Harith al-Muhasibi, covers content dealing with the Qur’an’s rhetoric, its muhkam and mutashabih, abrogation, etc. Most of the themes in question evidently share a common ground: understanding the Scripture and producing an accurate commentary built on this preliminary phenomenon of understanding. The content of other renowned works with an overtone of Tafseer methodology, such as Funun al-Afnan and al-Iqsir fi Ilm al-Tafseer, and of succeeding ones such as al-Itqan and al-Burhan, is also rich in hints related to the preliminary phenomenon of understanding. However, these works cannot be classified as full-fledged methodology manuals assuring a true understanding of the Qur’an. Hermeneutics, by contrast, is believed to supply substantial material applicable to Qur’an commentary, as it deals with the nature of understanding itself. Referring to the latest tendencies in Tafseer methodology, this paper centers on the hermeneutical debates in modern Qur’an-commentary scholarship and on the incentives that lead scholars to apply hermeneutics in the Tafseer literature. Guided by these incentives, the study consists of three parts. In the introduction, the paper presents the key features of the classical methodology works in general terms and traces the main methodological shifts of modern times in Qur’an commentary; to this end, the revisionist school, ventures in scientific Qur’an commentary, and thematic Qur’an commentary are included and analyzed briefly, while historical-critical commentary on the Qur’an, as it bears a close relationship with hermeneutics, is treated predominantly. The second part addresses the hermeneutical nature of understanding the Scripture, establishes a timeline for the beginning of the hermeneutics debates in Tafseer, and manifests the influence of Fazlur Rahman (d. 1988) as a theoretical bridge. In the final part, reactions against the application of hermeneutics in Tafseer activity, as well as pro-hermeneutics works, are revealed through cross-references to the prominent figures of both camps, and the literature in question within theology scholarship in Turkey is explored critically.

Keywords: hermeneutics, Tafseer, methodology, Ulum al- Qur’an, modernity

Procedia PDF Downloads 75
188 Higher Education in India: Strengths, Weaknesses, Opportunities and Threats

Authors: Renu Satish Nair

Abstract:

The Indian higher education system is the third largest in the world, after the United States and China. India is experiencing rapid growth in higher education in terms of student enrollment as well as the establishment of new universities, colleges and institutes of national importance. Presently, about 22 million students are enrolled in higher education and more than 46,000 institutions function as centers of higher education. The Indian government plays a 'command and control' role in higher education. The main governing body is the University Grants Commission, which enforces its standards, advises the government, and helps coordinate between the centre and the states. Accreditation of higher learning is overseen by 12 autonomous institutions established by the University Grants Commission. The present paper is an effort to analyze the strengths, weaknesses, opportunities and threats (SWOT analysis) of the Indian higher education system. Higher education in India is progressing by virtue of strengths that are being recognized at the global level. Several Indian institutions, such as the Indian Institutes of Technology (IITs), the Indian Institutes of Management (IIMs) and the National Institutes of Technology (NITs), have been globally acclaimed for their standard of education. Three Indian universities were listed in the Times Higher Education list of the world's top 200 universities in 2005 and 2006: the Indian Institutes of Technology, the Indian Institutes of Management, and Jawaharlal Nehru University. Six Indian Institutes of Technology and the Birla Institute of Technology and Science, Pilani, were listed among the top 20 science and technology schools in Asia by Asiaweek. The Indian School of Business, situated in Hyderabad, was ranked number 12 in the global MBA rankings by the Financial Times of London in 2010, while the All India Institute of Medical Sciences has been recognized as a global leader in medical research and treatment. At the same time, because of its vast expansion, the system bears several weaknesses. The higher education system in many parts of the country is in a state of disrepair. In almost half the districts in the country, higher education enrollment is very low, and almost two thirds of universities and 90% of colleges are rated below average on quality parameters. This can be attributed to underprepared faculty, unwieldy governance and other obstacles to innovation and improvement that could prohibit India from meeting its national education goals. The opportunities offered by the Indian higher education system are wide-ranging. The national institutions train their graduates to compete at the global level and grab opportunities worldwide, while the state universities and colleges, with their limited resources, produce graduates capable of securing career opportunities and holding responsible positions in various government and private sectors within the country, further creating opportunities for weaker sections of society to join the mainstream. Several factors can be defined as threats to the Indian higher education system; they are a matter of great concern and need proper attention: a conservative society, particularly with regard to women's education; a lack of transparency; and the treatment of higher education as a means of business.

Keywords: Indian higher education system, SWOT analysis, university grants commission, Indian institutes of technology

Procedia PDF Downloads 898
187 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar’s primary source of fresh water is seawater desalination. Among the major processes that are commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs, allied with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant, from damage to marine ecosystems, to large land use, to the discharge of tons of greenhouse gases and a huge carbon footprint. A less energy-consuming technique based on membrane separation, sought to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted growing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially to RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate since only water vapor is transferred, the ability to utilize low-grade or waste heat from the oil and gas industries to heat the feed to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. The objective of this study is to analyze the characteristics and morphology of a membrane suitable for DCMD, through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data is used to compare DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used in the study has a contact angle of 127° and a highly porous surface, supported by a less porous PP membrane of larger pore size. A study of the effect of feed salinity and temperature on the water quality of the distillate, based on ICP and IC analysis, showed that for any salinity and feed temperature (up to 70°C), the electrical conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e., 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
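
The salt-rejection figure quoted above follows from the feed and distillate concentrations, rejection (%) = (1 − C_distillate / C_feed) × 100. A quick check is sketched below; the distillate TDS is an illustrative assumption consistent with the reported <5 μS/cm conductivity, not a measured value.

```python
# Quick check of the salt-rejection figure for DCMD.
def salt_rejection(c_feed_mg_l: float, c_distillate_mg_l: float) -> float:
    """Percent salt rejection from feed/distillate concentrations (mg/L)."""
    return (1.0 - c_distillate_mg_l / c_feed_mg_l) * 100.0

feed_tds = 100_000.0    # reported maximum feed salinity (mg/L TDS)
distillate_tds = 3.0    # assumed distillate TDS (mg/L), ~ a few uS/cm

print(f"salt rejection: {salt_rejection(feed_tds, distillate_tds):.3f}%")  # ~99.997%
```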

Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation

Procedia PDF Downloads 227
186 Digital Image Correlation Based Mechanical Response Characterization of Thin-Walled Composite Cylindrical Shells

Authors: Sthanu Mahadev, Wen Chan, Melanie Lim

Abstract:

Anisotropy-dominated continuous-fiber composite materials have garnered attention in numerous mechanical and aerospace structural applications. The tailored mechanical properties of advanced composites can exhibit superiority in terms of stiffness-to-weight ratio, strength-to-weight ratio, and low-density characteristics, coupled with significant improvements in fatigue resistance over their metal-structure counterparts. Extensive research has demonstrated their core potential as more than mere lightweight substitutes for conventional materials. Prior work by Mahadev and Chan focused on formulating a prognosis methodology, based on a modified composite shell theory, for investigating the structural response of thin-walled circular cylindrical composite shell configurations under in-plane mechanical loads. The prime motivation for developing this theory was its capability to generate simple yet accurate closed-form analytical results that can efficiently characterize circular composite shell construction. It showcased the development of a novel mathematical framework to analytically identify the location of the centroid for thin-walled, open cross-section, curved composite shells characterized by circumferential arc angle, thickness-to-mean-radius ratio, and total laminate thickness. Ply stress variations in curved cylindrical shells were analytically examined under centric tensile and bending loading. This work presents a cost-effective, small-platform experimental methodology that takes advantage of the full-field measurement capability of digital image correlation (DIC) for an accurate assessment of key mechanical parameters such as in-plane mechanical stresses and strains, centroid location, etc. Mechanical property measurement of advanced composite materials can be challenging due to their anisotropy and complex failure mechanisms, and full-field displacement measurements are well suited to characterizing them because of the complexity of their deformation. This work encompasses the fabrication of a set of curved cylindrical shell coupons, the design and development of a novel test fixture, and an innovative experimental methodology that demonstrates the capability to very accurately predict the location of the centroid in such curved composite cylindrical strips by employing a DIC-based strain measurement technique. The percentage difference between the experimental centroid measurements and the previously estimated analytical centroid results is observed to be small, showing good agreement. The developed analytical modified shell theory provides the capability to understand the fundamental behavior of thin-walled cylindrical shells and offers the potential to open novel avenues for understanding the physics of such structures at the laminate level.
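
DIC delivers full-field displacement maps; in-plane strain fields then follow from the displacement gradients. Below is a minimal sketch of that post-processing step under small-strain assumptions, using a synthetic displacement field in place of real DIC output; the grid dimensions and displacement coefficients are illustrative assumptions.

```python
# Converting full-field DIC displacements (u, v) to small-strain fields:
# eps_xx = du/dx, eps_yy = dv/dy, gamma_xy = du/dy + dv/dx.
import numpy as np

nx, ny = 200, 100                      # grid of DIC measurement points
dx = dy = 0.1                          # grid spacing (mm)
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y)               # arrays of shape (ny, nx)

u = 1e-3 * X + 5e-4 * Y                # synthetic displacement in x (mm)
v = -3e-4 * Y                          # synthetic displacement in y (mm)

du_dy, du_dx = np.gradient(u, dy, dx)  # axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, dy, dx)

eps_xx = du_dx
eps_yy = dv_dy
gamma_xy = du_dy + dv_dx

print("mean eps_xx:", eps_xx.mean())       # ~1.0e-3, as encoded in u
print("mean gamma_xy:", gamma_xy.mean())   # ~5.0e-4
```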

Keywords: anisotropy, composites, curved cylindrical shells, digital image correlation

Procedia PDF Downloads 316
185 Negative Perceptions of Ageing Predict Greater Dysfunctional Sleep-Related Cognition Among Adults Aged 60+

Authors: Serena Salvi

Abstract:

Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors' health and functioning. This set of studies has shown that a negative internalization of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluations. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often proven unreliable when compared with objective sleep measures. Investigations of self-reported sleep quality among older adults suggest that this portion of the population tends to accept disrupted sleep if it is believed to be up to standard for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study is to examine the association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition; more specifically, it investigates a potential association between personal attitudes towards ageing, sleep locus of control, and dysfunctional beliefs about sleep in this portion of the population. Data were statistically analysed in SPSS. Participants were recruited through the online participant recruitment system Prolific; attention-check questions were included throughout the questionnaire, and the consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396). Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. Pearson's coefficient was used for interval variables, independent-samples t-tests for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalized using the SLOC and DBAS-16 scales. Of the subscales in the brief version of the APQ, Emotional Representations (ER), Control Positive (PC), and Control and Consequences Negative (NC) proved particularly relevant for the remit of this study. Regression analyses show that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes about sleep in this sample, after controlling for subjective sleep quality, level of depression, and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of variance in SLOC, after controlling for subjective sleep quality, level of depression, and dysfunctional beliefs about sleep.
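
The hierarchical regression described above can be illustrated as follows: step 1 enters the control variables (sleep quality, depression, age), step 2 adds the APQ-B Emotional Representations (ER) score, and the change in R² is inspected. The column names and the synthetic data below are assumptions for illustration only; the study itself used SPSS.

```python
# Hedged sketch of a two-step hierarchical linear regression in statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "sleep_quality": rng.normal(size=n),
    "depression": rng.normal(size=n),
    "age": rng.integers(60, 90, size=n),
    "er": rng.normal(size=n),
})
# Synthetic outcome: DBAS-16 score with a real ER effect built in.
df["dbas16"] = 0.5 * df["er"] + 0.3 * df["depression"] + rng.normal(size=n)

step1 = smf.ols("dbas16 ~ sleep_quality + depression + age", data=df).fit()
step2 = smf.ols("dbas16 ~ sleep_quality + depression + age + er", data=df).fit()

print(f"R2 step 1: {step1.rsquared:.3f}")
print(f"R2 step 2: {step2.rsquared:.3f} (delta = {step2.rsquared - step1.rsquared:.3f})")
print("ER coefficient after controls:", round(step2.params["er"], 3))
```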

Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality

Procedia PDF Downloads 103
184 Effect of Non-Thermal Plasma, Chitosan and Polymyxin B on Quorum Sensing Activity and Biofilm of Pseudomonas aeruginosa

Authors: Alena Cejkova, Martina Paldrychova, Jana Michailidu, Olga Matatkova, Jan Masak

Abstract:

The increasing resistance of pathogenic microorganisms to many antibiotics is a serious threat to the treatment of infectious diseases and to the disinfection of medical instruments. It should be added that the resistance of microbial populations growing in biofilms is often up to 1000 times higher than that of planktonic cells. Biofilm formation in a number of microorganisms is largely governed by the quorum sensing regulatory mechanism. Finding external factors, such as natural substances or physical processes, that can interfere effectively with quorum sensing signal molecules should reduce the ability of a cell population to form biofilm and increase the effectiveness of antibiotics. The present work is devoted to the effect of chitosan, as a representative of natural substances with anti-biofilm activity, and of non-thermal plasma (NTP), alone or in combination with polymyxin B, on biofilm formation by Pseudomonas aeruginosa. Particular attention was paid to the influence of these agents on the level of quorum sensing signal molecules (acyl-homoserine lactones) during planktonic and biofilm cultivations. Opportunistic pathogenic strains of Pseudomonas aeruginosa (DBM 3081, DBM 3777, ATCC 10145, ATCC 15442) were used as model microorganisms. Cultivations of planktonic and biofilm populations in 96-well microtiter plates on a horizontal shaker were used to determine the antibiotic and anti-biofilm activity of chitosan and polymyxin B. Cells growing as biofilm on a titanium alloy used in the preparation of joint replacements were exposed to non-thermal plasma generated by a cometary corona with a metallic grid for 15 and 30 minutes; cultivation then continued in fresh LB medium with or without chitosan or polymyxin B for the next 24 h. Biofilms were quantified by the crystal violet assay. The metabolic activity of the cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) colorimetric test, based on the reduction of MTT to formazan by the dehydrogenase system of living cells. The activity of N-acyl homoserine lactones (AHLs), compounds involved in the regulation of biofilm formation, was determined using an Agrobacterium tumefaciens strain harboring a traG::lacZ/traR reporter gene responsive to AHLs. The experiments showed that both chitosan and non-thermal plasma reduce the AHL level and thus biofilm formation and stability, with an effectiveness that was somewhat strain dependent. During the eradication of P. aeruginosa DBM 3081 biofilm on titanium alloy induced by chitosan (45 mg/l), there was an 80% decrease in AHLs. Applying chitosan or NTP alone to the P. aeruginosa DBM 3777 biofilm did not cause a significant decrease in AHLs; however, the combination of both (chitosan 55 mg/l and NTP for 30 min) resulted in a 70% decrease. The combined application of NTP and polymyxin B allowed the antibiotic concentration to be reduced while achieving the same level of AHL inhibition in P. aeruginosa ATCC 15442. The results show that non-thermal plasma and chitosan have considerable potential for the eradication of highly resistant P. aeruginosa biofilms, for example on medical instruments or joint implants.

Keywords: anti-biofilm activity, chitosan, non-thermal plasma, opportunistic pathogens

Procedia PDF Downloads 200
183 Person-Centered Thinking as a Fundamental Approach to Improve Quality of Life

Authors: Christiane H. Kellner, Sarah Reker

Abstract:

The UN Convention on the Rights of Persons with Disabilities, which Germany has also ratified, postulates the necessity of user-centred design, especially when it comes to evaluating the individual needs and wishes of all citizens, and therefore requires a multidimensional approach. Based on this insight, the structure of the town-like centre in Schönbrunn - a large residential complex and service provider for persons with disabilities on the outskirts of Munich - will be remodelled to open up the community to all people and to transform its social space. This strategy should lead to more equal opportunities and open the way for a much more diverse community. The research project "Index for participation development and quality of life for persons with disabilities" (TeLe-Index, 2014-2016), which is anchored at the Technische Universität München and at the Franziskuswerk Schönbrunn, supports this transformation process, called "Vision 2030". In this context, we have provided academic supervision and support for three projects (the construction of a new school, inclusive housing for children and teenagers with disabilities, and the professionalization of employees using person-centred planning). Since we cannot present all the issues of the umbrella project within the conference framework, we focus here in more depth on one sub-project, "The Person-Centred Think Tank" [Arbeitskreis Personenzentriertes Denken; PZD]. In person-centred thinking (PCT), persons with disabilities are encouraged to (re)gain or retain control of their lives through the development of new choice options and the validation of individual lifestyles; PCT should thus foster and support both participation and quality of life. The project aims to establish PCT as a fundamental approach for both employees and persons with disabilities in the institution, through in-house training for the staff and, subsequently, training for users. Hence, for the academic support and supervision team, the questions arising from this venture can be summed up as follows: (1) has PCT already gained a foothold at the Franziskuswerk Schönbrunn? And (2) how does it affect interactions with persons with disabilities, and how does it influence the latter's everyday life? In line with the holistic approach described above, the target groups for this study are both the staff and the users of the institution. Initially, we planned to use the group discussion method for both target groups. However, in the course of a pretest with persons with intellectual disabilities, it became clear that this type of interview, with hardly any external structuring, provided only limited feedback; there was more interaction and dialogue between the interlocutors when the discussions were moderated. Therefore, for this target group, we introduced structured group interviews. The insights obtained so far allow us to present the interim results of our evaluation. We analysed the group interviews and discussions using qualitative content analysis according to Mayring in order to obtain information about users' quality of life, sorting the statements relating to quality of life into three dimensions: subjective wellbeing, self-determination, and participation. The majority of statements related to subjective wellbeing and self-determination; the limited feedback on participation in particular clearly demonstrates that the lives of most users do not extend beyond the confines of the institution. A number of statements highlighted the fact that PCT is anchored in the everyday interactions within the groups. However, the implementation and fostering of PCT on a broader level could not yet be detected and thus remain aims of the project. The additional interviews we have planned should validate the results obtained so far and open up new perspectives.

Keywords: person-centered thinking, research with persons with disabilities, residential complex and service provider, participation, self-determination

Procedia PDF Downloads 323
182 The Routes of Human Suffering: How Point-Source and Destination-Source Mapping Can Help Victim Services Providers and Law Enforcement Agencies Effectively Combat Human Trafficking

Authors: Benjamin Thomas Greer, Grace Cotulla, Mandy Johnson

Abstract:

Human trafficking is one of the fastest-growing international crimes and human rights violations in the world. The United States Department of State (State Department) estimates that some 800,000 to 900,000 people are trafficked across sovereign borders annually, with approximately 14,000 to 17,500 of these people coming into the United States. Today's slavery is conducted by unscrupulous individuals who are often connected to organized criminal enterprises and transnational gangs, extracting huge monetary sums. According to the International Labour Organization (ILO), human traffickers collect approximately $32 billion worldwide annually. Surpassed only by narcotics dealing, trafficking of humans is tied with illegal arms sales as the second-largest criminal industry in the world and is the fastest-growing criminal field of the 21st century. Perpetrators of this heinous crime abound. They are not limited to "sole practitioners" of human trafficking, but often include Transnational Criminal Organizations (TCOs), domestic street gangs, labor contractors, and otherwise seemingly ordinary citizens. Monetary gain is being elevated over territorial disputes, and street gangs increasingly operate in collaboration with TCOs, utilizing their vast networks to disguise their criminal activity and avoid detection. Traffickers rely on a network of clandestine routes to sell their commodities with impunity. As law enforcement agencies seek to retard the expansion of transnational criminal organizations' entry into human trafficking, it is imperative that they develop reliable maps of known exploitative trafficking routes. In a recent report to the Mexican Congress, the Procuraduría General de la República (PGR) disclosed that from 2008 to 2010 it had identified at least 47 unique criminal networking routes used to traffic victims, and that Mexico's domestic victims are estimated at 800,000 adults and 20,000 children annually. Designing a reliable mapping system is a crucial step toward an effective law enforcement response and the deployment of a successful victim support system. Creating such a mapping analytic is exceedingly difficult: traffickers constantly change the way they traffic and exploit their victims, swiftly adapt to local environmental factors, and react remarkably well to market demands, exploiting limitations in the prevailing laws. This article highlights how human trafficking has become one of the fastest-growing and most high-profile human rights violations in the world today; compiles current efforts to map and illustrate trafficking routes; and demonstrates how proprietary analytical point-source and destination-source mapping can help local law enforcement, governmental agencies, and victim services providers respond effectively to the type and nature of trafficking in their specific geographical locale. Trafficking transcends state and international borders and demands effective and consistent cooperation between local, state, and federal authorities. Each region of the world has different impact factors which create distinct challenges for law enforcement and victim services. Our mapping system lays the groundwork for a targeted anti-trafficking response.
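
To make the point-source / destination-source idea concrete, route data of this kind can be modelled as a weighted directed graph whose edges carry observed case counts. The sketch below is purely illustrative, with invented place names and counts rather than data from any agency, and networkx as an assumed tooling choice rather than the authors' system.

```python
# Hedged sketch: origins and destinations as nodes of a weighted directed
# graph, with case counts on the edges. All names and counts are invented.
import networkx as nx

G = nx.DiGraph()
routes = [                      # (origin, destination, observed cases)
    ("SourceTownA", "BorderHubB", 12),
    ("BorderHubB", "DestinationCityC", 9),
    ("SourceTownD", "BorderHubB", 4),
]
for origin, dest, cases in routes:
    G.add_edge(origin, dest, weight=cases)

# Rank corridors by observed volume to prioritize enforcement and
# victim-services deployment along the busiest routes.
busiest = sorted(G.edges(data="weight"), key=lambda e: e[2], reverse=True)
for origin, dest, cases in busiest:
    print(f"{origin} -> {dest}: {cases} cases")
```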

Keywords: human trafficking, mapping, routes, law enforcement intelligence

Procedia PDF Downloads 381
181 Impact of Climate Change on Crop Production: Climate Resilient Agriculture Is the Need of the Hour

Authors: Deepak Loura

Abstract:

Climate change, a lasting change in the statistical distribution of weather patterns over periods ranging from decades to millions of years, is considered one of the major environmental problems of the 21st century. Agriculture and climate change are intimately correlated in various respects, and the threat of a varying global climate has strongly drawn the attention of scientists, as these variations negatively affect global crop production and compromise food security worldwide. The fast pace of development and industrialization and the indiscriminate destruction of the natural environment, especially over the last century, have altered the concentration of the atmospheric gases that drive global warming. Carbon dioxide (CO₂), methane (CH₄), and nitrous oxide (N₂O) are important biogenic greenhouse gases (GHGs) from the agricultural sector contributing to global warming, and their concentrations are increasing alarmingly. Agricultural productivity can be affected by climate change in two ways: first, directly, through effects on plant growth, development and yield due to changes in rainfall/precipitation, temperature and/or CO₂ levels; and second, indirectly, through considerable impacts on agricultural land use due to snow melt, the availability of irrigation, the frequency and intensity of inter- and intra-seasonal droughts and floods, soil organic matter transformations, soil erosion, changes in the distribution and frequency of infestation by insect pests, diseases or weeds, the decline in arable area (due to the submergence of coastal lands), and the availability of energy. An increase in atmospheric CO₂ promotes the growth and productivity of C3 plants. An increase in temperature, on the other hand, can reduce crop duration, increase crop respiration rates, alter the equilibrium between crops and pests, hasten nutrient mineralization in soils, decrease fertilizer-use efficiencies, and increase evapotranspiration, among other effects. All of these could considerably affect crop yields in the long run. Climate-resilient agriculture, consisting of adaptation, mitigation, and other agricultural practices, can enhance the capacity of the system to withstand climate-related disturbances by resisting damage and recovering quickly. Climate-resilient agriculture turns the climate change threats that have to be tackled into new business opportunities for the sector in different regions, and therefore provides a triple win: mitigation, adaptation, and economic growth. Improving the soil organic carbon stock is integral to any strategy for adapting to and mitigating abrupt climate change, advancing food security, and improving the environment; soil carbon sequestration is accordingly one of the major mitigation strategies for achieving climate-resilient agriculture. Climate-smart agriculture is the only way to lower the negative impact of climate variations on crop adaptation before they drastically affect global crop production. To cope with these extreme changes, future development needs to make adjustments in technology, management practices, and legislation. Adaptation and mitigation are twin approaches to bringing resilience to climate change in agriculture.

Keywords: climate change, global warming, crop production, climate resilient agriculture

Procedia PDF Downloads 74
180 Apple in the Big Tech Oligopoly: An Analysis of Disruptive Innovation Trends and Their Influence on the Capacity of Conserving a Positive Social Impact as Primary Purpose

Authors: E. Loffi Borghese

Abstract:

In this comprehensive study, we delve into the intricate dynamics of the big tech oligopoly, focusing on Apple as a case study. The core objective is to scrutinize the evolving relationship between a firm's commitment to positive social impact as its primary purpose and its resilience in the face of disruptive innovations within the big tech market. Our exploration begins with a theoretical framework emphasizing the significance of distinguishing between corporate social responsibility and social impact as a primary purpose. Drawing on insights from Drumwright and from Bartkus and Glassman, we underscore the transformative potential when a firm aligns its core business with a social mission, transcending mere side activities. Examining successful firms such as Apple, we adopt Sinek's perspective on inspirational leadership and the "golden circle". This framework sheds light on why some organizations, like Apple, succeed in making positive social impact their primary purpose. Apple's early-stage life cycle is dissected, revealing a profound commitment to challenging the status quo and promoting simpler alternatives that resonate with its users' lives. The study then navigates through industry life cycles, drawing on Klepper's stages and Christensen's disruptive innovations. Apple's dominance in the big tech oligopoly is contrasted with companies like Harley-Davidson and Polaroid, illustrating the consequences of failing to adapt to disruptive innovations. The data and methods employed encompass a qualitative approach, leveraging sources such as the ECB, Forbes, Our World in Data, and scientific articles. A secondary data analysis probes Apple's market evolution within the big tech oligopoly, emphasizing the shifts in market context and innovation trends that demand strategic adaptations. The subsequent sections scrutinize Apple's present innovation strategies, highlighting its diversified product portfolio and intensified focus on big data. We examine the implications of these shifts for Apple's capacity to maintain positive social impact as its primary purpose, pondering the potential consequences for its brand perception. The study culminates in a reflection on the broader implications of the big tech oligopoly's dominance. It contemplates the diminishing competitiveness of the market and the potential sidelining of positive social impact as a competitive advantage. The expansion of tech firms into diverse sectors raises concerns about negative societal impacts, prompting a call for increased regulatory attention and awareness. In conclusion, this research serves as a catalyst for heightened awareness of, and discussion on, the intricate interplay between firms' social impact goals, disruptive innovations, and the broader societal implications within the evolving landscape of the big tech oligopoly. Despite its limitations, this study aims to stimulate further research, urging a conscious and responsible approach to shaping the future economic system.

Keywords: innovation trends, market dynamics, social impact, tech oligopoly

Procedia PDF Downloads 74
179 Mastopexy with the "Dermoglandular Autoaugmentation" Method. Increased Stability of the Result. Personalized Technique

Authors: Maksim Barsakov

Abstract:

Introduction. In modern plastic surgery, there are a large number of breast lift techniques. Due to the spreading information about the "side effects" of silicone implants, interest in implant-free mastopexy is increasing year after year. However, despite the variety of techniques, patients sometimes do not get full satisfaction from the results of mastopexy because of insufficient filling of the upper pole, extended anchor-type postoperative scars, and sometimes an aesthetically unattractive breast shape. The stability of the result after mastopexy depends on many factors, including postoperative rehabilitation, stability of weight and hormonal background, and the stretchability of tissues. The high recurrence rate of ptosis and the short-term aesthetic effect of mastopexy indicate the urgency of improving surgical techniques and increasing the stabilization of breast tissue. Purpose of the study. To develop and introduce into practice a technique of mastopexy based on the use of a modified Ribeiro flap, as well as elements of tissue movement and fixation designed to increase the stability of the postoperative result, and to define indications for the application of this surgical technique. Materials and Methods. We operated on 103 patients aged 18 to 53 years from 2019 to 2023 according to the reported method. These were patients undergoing primary mastopexy, secondary mastopexy, and also implant removal with one-stage mastopexy. The patients were followed up for 12 months to assess the stability of the result. Results and Discussion. Observing the patients, we noted greater stability of the breast shape and upper pole filling compared to conventional classical methods. We did not have to resort to anchor scars: in 90 percent of cases an inverted T-shaped scar was used, and in 10 percent a J-scar. The complications identified among the operated patients were distributed as follows: impaired healing of the junction of the vertical and horizontal sutures at 1-1.5 months after surgery - 15 patients (with ointment treatment, healing was observed within 7-30 days); permanent loss of NAC sensitivity - 0 patients; vascular disorders in the NAC area/areola necrosis - 0 patients; marginal necrosis of the areola - 2 patients (independent healing within 3-4 weeks without aesthetic defects); aesthetically unacceptable mature scars - 3 patients; unilateral partial liponecrosis of the autoflap - 1 patient; recurrence of ptosis - 1 patient (after weight loss of 12 kg). In the late postoperative period, 2 patients became pregnant and gave birth, and no lactation problems were observed. Conclusion. Methods of breast lift continue to improve worldwide, which is especially relevant today given the increased attention to this operation. The proposed method of mastopexy with a glandular autoflap allows obtaining a stable result and a fuller breast shape in most cases, avoids extended anchor scars, and preserves the possibility of lactation. The author has obtained a patent for this method of mastopexy.

Keywords: mastopexy, mammoplasty, autoflap, personal technique

Procedia PDF Downloads 39
178 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing, and engineering. Across these disciplines there can seem to be enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked with articulating their method of inquiry. A working party of students from across disciplines had to design a conference call, a visual identity, and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students, and computing researchers, with inter-disciplinary dialogue facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation, and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by the communications and art and design students, and the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 206
177 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through-Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites

Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis

Abstract:

Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates an increasing demand for their employment in high-value bespoke applications such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) structures, which dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several efforts by research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine the intrinsically high relative strengths exhibited with improved z-axis electrical response. However, only a limited number of studies deal with printing nano-enhanced polymer inks to produce a pattern at the dry fabric level that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the employment of a screen-printing process on technical dry fabrics using nano-reinforced polymer-based inks to achieve the required through-thickness conductivity, opening new pathways for the application of fiber reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed, printed in different patterns. The aim of the present study is to investigate both the effect of the nanoparticle concentration and the droplet patterns (diameter, inter-droplet distance, and coverage) to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is measured initially at the ink level, using a “four-probe” configuration, to pinpoint the optimum concentrations to be employed. Upon printing of the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting dry fabrics, in alignment with the fabrication of CFRPs, which requires adequate wetting by the epoxy matrix. Results demonstrated increased electrical conductivities of the printed droplets, with the conductivity rising from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printability of dense and dispersed patterns has exhibited promising results in terms of increasing the z-axis conductivity without inhibiting the penetration of the epoxy matrix at the processing stage of fiber reinforced composites. The high value and niche prospect of the resulting applications that can stem from CFRPs with increased through-thickness electrical conductivities highlight the potential of the presented endeavor, signifying screen printing as the process to nano-enable z-axis electrical conductivity in composite laminas. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013).
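
As a rough illustration of the “four-probe” conductivity measurement mentioned in this abstract, the sketch below converts a probe voltage/current reading into bulk conductivity. It is a minimal example assuming a collinear thin-film geometry; the function name and all numbers are illustrative, not values from the study.

```python
import math

def four_probe_conductivity(voltage_v, current_a, thickness_m):
    """Bulk conductivity (S/m) from a collinear four-probe reading on a
    thin film, using the standard geometric factor pi/ln(2) (valid when
    the sample is much wider than the probe spacing and much thinner
    than that spacing)."""
    sheet_resistance = (math.pi / math.log(2)) * voltage_v / current_a  # ohm/sq
    resistivity = sheet_resistance * thickness_m                        # ohm*m
    return 1.0 / resistivity

# Hypothetical reading for a printed nano-reinforced droplet:
sigma = four_probe_conductivity(voltage_v=0.245, current_a=1e-3, thickness_m=100e-6)
print(f"conductivity ~ {sigma:.1f} S/m")  # ~9 S/m, within the reported 8-10 S/m range
```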

Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing

Procedia PDF Downloads 151
176 Growth Mechanism and Sensing Behaviour of Sn Doped ZnO Nanoprisms Prepared by Thermal Evaporation Technique

Authors: Sudip Kumar Sinha, Saptarshi Ghosh

Abstract:

While there’s a perpetual buzz around zinc oxide (ZnO) superstructures for their unique optical features, this versatile material has been constantly utilized to manifest tailored electronic properties through the rendition of distinct morphologies. Yet the unorthodox approach of implementing novel 1D nanostructures of ZnO (pristine or doped) for volatile sensing applications has ample scope to accommodate new, unconventional morphologies. In the last two decades, solid-state sensors have attracted much curiosity for their relevance in identifying pollutant, toxic, and other industrial gases. In particular, gas sensors based on metal oxide semiconducting (wide Eg) nanomaterials have recently attracted intensive attention owing to their high sensitivity and fast response and recovery times. When these materials are exposed to air, atmospheric O2 dissociates and is adsorbed on the sensor surface, trapping outermost-shell electrons. A depletion zone thus forms on the surface of the sensor, which enhances the potential barrier height at the grain boundaries. Once a target gas is exposed to the sensor, the chemical interaction between the chemisorbed oxygen and the specific gas liberates the trapped electrons. Altering the amount of adsorbate is therefore a viable approach to improving the sensitivity toward any target gas/vapour molecule. Accordingly, this study presents the spontaneous yet self-catalytic creation of Sn-doped ZnO hexagonal nanoprisms on Si (100) substrates through the thermal evaporation-condensation method, and their subsequent deployment for volatile sensing. In particular, the sensors were utilized to detect molecules of ethanol, acetone, and ammonia below their permissible exposure limits, which returned sensitivities of around 85%, 80%, and 50%, respectively. The influence of Sn concentration on the growth, microstructural, and optical properties of the nanoprisms, along with its role in augmenting the sensing parameters, has been detailed. The single-crystalline nanostructures have a typical diameter ranging from 300 to 500 nm and a length that extends up to a few micrometers. HRTEM images confirmed the hexagonal crystallography of the nanoprisms, while the SAED pattern asserted their single-crystalline nature. The growth habit is along the low-index <0001> directions. The growth mechanism of the as-deposited nanostructures is directly influenced by the varying supersaturation ratio, fairly high substrate temperatures, and specific surface defects in certain crystallographic planes, all acting cooperatively to decide the final product morphology. Room-temperature photoluminescence (PL) spectra of these rod-like structures exhibit a weak ultraviolet (UV) emission peak at around 380 nm and a broad green emission peak in the 505 nm regime. An estimate of the sensing parameters against the dispensed target molecules highlighted the potential of the nanoprisms as an effective volatile sensing material. The Sn-doped ZnO nanostructures with their unique prismatic morphology may find important applications in various chemical sensors as well as other potential nanodevices.
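
For readers unfamiliar with how the quoted sensitivities (85%, 80%, 50%) are typically computed, the snippet below shows one common response definition for an n-type oxide sensor exposed to a reducing gas; the resistance values are hypothetical, not measurements from this work.

```python
def sensor_response_percent(r_air_ohm, r_gas_ohm):
    """Response of an n-type oxide sensor to a reducing gas: the relative
    drop in resistance as chemisorbed oxygen releases trapped electrons
    back into the conduction band."""
    return 100.0 * (r_air_ohm - r_gas_ohm) / r_air_ohm

# Illustrative resistances in air and in the target gas (not measured values):
print(sensor_response_percent(r_air_ohm=2.0e6, r_gas_ohm=3.0e5))  # 85.0
```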

Keywords: gas sensor, HRTEM, photoluminescence, ultraviolet, zinc oxide

Procedia PDF Downloads 240
175 Integration of an Evidence-Based Medicine Curriculum into Physician Assistant Education: Teaching for Today and the Future

Authors: Martina I. Reinhold, Theresa Bacon-Baguley

Abstract:

Background: Medical knowledge continuously evolves, and evidence-based medicine (EBM) has emerged as a model to help health care providers stay up-to-date. The practice of EBM requires new skills of the health care provider, including directed literature searches, the critical evaluation of research studies, and the direct application of the findings to patient care. This paper describes the integration and evaluation of an evidence-based medicine course sequence in a Physician Assistant curriculum. This course sequence teaches students to manage and use the best clinical research evidence to competently practice medicine. A survey was developed to assess the outcomes of the EBM course sequence. Methodology: The cornerstone of the three-semester EBM sequence is interactive small-group discussions designed to introduce students to the most clinically applicable skills to identify, manage, and use the best clinical research evidence to improve the health of their patients. During the three-semester sequence, students are assigned each semester to small-group discussions facilitated by faculty with varying backgrounds and expertise. Prior to the start of the first EBM course in the winter semester, PA students complete a knowledge-based survey developed by the authors to assess the effectiveness of the course series. The survey consists of 53 Likert-scale questions that address the nine objectives of the course series. At the end of the three-semester course series, the same survey is given to all students in the program, and the results from before and after the sequence of EBM courses are compared, with specific attention paid to the overall performance of students on the nine course objectives. Results: We find that students from the Classes of 2016 and 2017 consistently improve (as measured by percent correct responses on the survey tool) after the EBM course series (Class of 2016: pre 62%, post 75%; Class of 2017: pre 61%, post 70%). The biggest increase in knowledge was observed in the areas of finding and evaluating the evidence, with asking concise clinical questions (Class of 2016: pre 61%, post 81%; Class of 2017: pre 61%, post 75%) and searching the medical database (Class of 2016: pre 24%, post 65%; Class of 2017: pre 35%, post 66%). Questions requiring students to analyze, evaluate, and report on the available clinical evidence regarding diagnosis showed improvement, but to a lesser extent (Class of 2016: pre 56%, post 77%; Class of 2017: pre 56%, post 61%). Conclusions: The outcomes identified that students did gain skills that will allow them to apply EBM principles. In addition, the outcomes of the knowledge-based survey allowed the faculty to focus on areas needing improvement, specifically the translation of best evidence into patient care. To address this area, the clinical faculty developed case scenarios that were incorporated into the lecture and discussion sessions, allowing students to better connect the research studies with patient care. Students commented that ‘class discussion and case examples’ contributed most to their learning and that ‘it was helpful to learn how to develop research questions and how to analyze studies and their significance to a potential client’. As evident from the outcomes, the EBM courses achieved their goals and were well received by the students.
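
The pre/post comparison described above reduces to simple arithmetic on per-objective percent-correct scores; the sketch below reproduces it using the Class of 2016 figures quoted in the abstract, with paraphrased objective labels.

```python
# Per-objective percent-correct scores before and after the EBM series
# (Class of 2016 values from the abstract; labels paraphrased).
pre_post = {
    "overall": (62, 75),
    "asking concise clinical questions": (61, 81),
    "searching the medical database": (24, 65),
    "evaluating evidence on diagnosis": (56, 77),
}
for objective, (pre, post) in pre_post.items():
    print(f"{objective}: {pre}% -> {post}% (gain {post - pre} points)")
```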

Keywords: evidence-based medicine, clinical education, assessment tool, physician assistant

Procedia PDF Downloads 125
174 The Theme 'Leyli and Majnun', the Ancient Legend of the East in the Cognominal Symphonic Poem of Great Composer Gara Garayev on Specific and Non-Specific Content

Authors: Vusala Amirbayova

Abstract:

Modern musicology, drawing on the achievements of a number of adjacent fields of science, has penetrated more deeply into the sphere of the artistic content of music and has developed a new scientific methodology, with methods and approaches for a comprehensive study of the problem. In this regard, a new theory developed by the famous Russian musicologist, professor V. Kholopova - the specific and non-specific content of music - draws attention with its distinct philosophical foundation and its coverage of the historical periods of the art of composition. The scholar related her theory to the art of European composers and did not include in her circle of interest the professional music and, especially, the folklore traditions existing on other continents. She endeavored to explain the triad included in the general content of music (the world of ideas, emotions, and subjects) using the example of composers' works belonging to different periods and cultures. In this respect, the artistic content of the works has been analyzed deeply and comprehensively on a new philosophical basis. The theme ‘Leyli and Majnun’, one of the ancient legends of the East, was developed by many poets, and each artist gave the work a unique artistic interpretation. In Azerbaijani music this literary source was successfully developed in the cognominal opera by the great U. Hajibeyli, and its embodiment by symphonic means required great skill and courage from Gara Garayev. Unlike in opera, there is no opportunity to act out the plot of ‘Leyli and Majnun’ in a symphonic poem, yet the composer managed to reflect the main purpose of its idea convincingly with purely musical means and created a great work of tragic spirit with a strong emotional impact. Though the artistic content and form of the ‘Leyli and Majnun’ symphonic poem have been sufficiently analyzed by music theorists until now, in our opinion it is the first time that the work is considered from the standpoint of specific music content. Therefore, after first reviewing the poem with traditional methods in the general plan, we make an effort to penetrate into a specific layer of its artistic content. G. Garayev's use of both national mode intonations and the major-minor system is grounded in the well-tempered system. The composer, widely using national mode intonations and modal harmonic means on this ground, managed to express the spirit and content of the poem. It perfectly embodies the grandeur and immortality of divine love and the struggle of a powerful human personality against the forces of despotism. Gara Garayev said about this work: “My most sublime goal and desire is to explain the literary issue that love endures all obstacles and overcomes even death”. The music of the ‘Leyli and Majnun’ symphonic poem is rich in deep desires and sharp contradictions. G. Garayev reflected these wonderful ideas about the power of music in his book ‘Articles, schools and sayings’: “Music is the decoration of life and a powerful source of inspiration”.

Keywords: content, music, symphonic, theory

Procedia PDF Downloads 268
173 The Impact of a Simulated Teaching Intervention on Preservice Teachers’ Sense of Professional Identity

Authors: Jade V. Rushby, Tony Loughland, Tracy L. Durksen, Hoa Nguyen, Robert M. Klassen

Abstract:

This paper reports a study investigating the development and implementation of an online multi-session ‘scenario-based learning’ (SBL) program administered to preservice teachers in Australia. The transition from initial teacher education to the teaching profession can present numerous cognitive and psychological challenges for early career teachers. Therefore, the identification of additional supports, such as scenario-based learning, that can supplement existing teacher education programs may help preservice teachers to feel more confident and prepared for the realities and complexities of teaching. Scenario-based learning is grounded in situated learning theory which holds that learning is most powerful when it is embedded within its authentic context. SBL exposes participants to complex and realistic workplace situations in a supportive environment and has been used extensively to help prepare students in other professions, such as legal and medical education. However, comparatively limited attention has been paid to investigating the effects of SBL in teacher education. In the present study, the SBL intervention provided participants with the opportunity to virtually engage with school-based scenarios, reflect on how they might respond to a series of plausible response options, and receive real-time feedback from experienced educators. The development process involved several stages, including collaboration with experienced educators to determine the scenario content based on ‘critical incidents’ they had encountered during their teaching careers, the establishment of the scoring key, the development of the expert feedback, and an extensive review process to refine the program content. The 4-part SBL program focused on areas that can be challenging in the beginning stages of a teaching career, including managing student behaviour and workload, differentiating the curriculum, and building relationships with colleagues, parents, and the community. Results from prior studies implemented by the research group using a similar 4-part format have shown a statistically significant increase in preservice teachers’ self-efficacy and classroom readiness from the pre-test to the final post-test. In the current research, professional teaching identity - incorporating self-efficacy, motivation, self-image, satisfaction, and commitment to teaching - was measured over six weeks at multiple time points: before, during, and after the 4-part scenario-based learning program. Analyses included latent growth curve modelling to assess the trajectory of change in the outcome variables throughout the intervention. The paper outlines (1) the theoretical underpinnings of SBL, (2) the development of the SBL program and methodology, and (3) the results from the study, including the impact of the SBL program on aspects of participating preservice teachers’ professional identity. The study shows how SBL interventions can be implemented alongside the initial teacher education curriculum to help prepare preservice teachers for the transition from student to teacher.
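
As a hint of what the planned analysis could look like in practice, the sketch below fits a linear growth trajectory with random intercepts and slopes, a common mixed-model approximation of latent growth curve modelling; the data file and column names are hypothetical, not from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant per measurement wave; "identity" is the
# professional-identity composite, "wave" the time point (0..5).
df = pd.read_csv("sbl_waves.csv")  # hypothetical file

# Fixed linear trajectory over waves, plus a random intercept and random
# slope for each participant (re_formula="~wave").
model = smf.mixedlm("identity ~ wave", df, groups=df["participant"], re_formula="~wave")
print(model.fit().summary())
```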

Keywords: classroom simulations, e-learning, initial teacher education, preservice teachers, professional learning, professional teaching identity, scenario-based learning, teacher development

Procedia PDF Downloads 71
172 IEEE802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With advances in technology, the wireless sensor network (WSN) has become one of the most promising candidates to implement the wireless industrial internet of things (IIOT) architecture. However, legacy IEEE 802.15.4 based WSN technologies such as Zigbee cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIOT environment. Recently, the IEEE developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose. Furthermore, the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e with the IPv6 protocol smoothly to form a complete protocol stack for IIOT. In this work, we develop key network technologies for the IEEE 802.15.4e based wireless IIoT architecture, focusing on practical design and system implementation. We realize an OpenWSN-based wireless IIOT system. The system architecture is divided into three main parts: web server, network manager, and sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and instruct them to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information. It executes the centralized scheduling algorithm, sends the scheduling table to each node, and manages the sensing tasks of each device. Sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure proper operation. To enhance the performance of such a system, we further propose dynamic bandwidth allocation and distributed scheduling mechanisms. The developed distributed scheduling mechanism enables each individual sensor node to build, maintain, and manage the dedicated link bandwidth with its parent and child nodes based on locally observed information, by exchanging Add/Delete commands via two processes. The first process, termed the schedule initialization process, allows each sensor node pair to identify the available idle slots to allocate the basic dedicated transmission bandwidth. The second process, termed the schedule adjustment process, enables each sensor node pair to adjust their allocated bandwidth dynamically according to the measured traffic loading. Such technology can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low. We propose a multi-frame retransmission mechanism that allows every network node to resend each packet up to a predefined number of times, with the multi-frame architecture built according to the number of layers of the network topology. Performance results via simulation reveal that this retransmission scheme provides sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of IIoT can be achieved.
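
To make the two-process distributed scheduling idea concrete, the sketch below shows a toy version of slot negotiation between a sensor node pair: initialization claims mutually idle slots, and adjustment issues Add/Delete decisions from the observed load. The slotframe size, thresholds, and function names are assumptions, not the authors' implementation.

```python
SLOTFRAME = 101  # slots per slotframe (hypothetical size)

def initialize(parent_busy, child_busy, n_basic):
    """Schedule initialization: claim idle slots free at both nodes
    to allocate the basic dedicated transmission bandwidth."""
    idle = [s for s in range(SLOTFRAME) if s not in parent_busy and s not in child_busy]
    return set(idle[:n_basic])

def adjust(links, queue_len, add_thresh=8, del_thresh=2):
    """Schedule adjustment: decide on an Add/Delete command from the
    locally measured traffic loading (here, the queue length)."""
    if queue_len > add_thresh:
        return "ADD"      # ask the peer for one more dedicated slot
    if queue_len < del_thresh and len(links) > 1:
        return "DELETE"   # release an underused slot
    return "KEEP"

links = initialize(parent_busy={0, 5}, child_busy={0, 7}, n_basic=2)
print(links, adjust(links, queue_len=10))  # e.g. {1, 2} ADD
```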

Keywords: IEEE 802.15.4e, industrial internet of things (IIOT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 161
171 Removal of VOCs from Gas Streams with Double Perovskite-Type Catalyst

Authors: Kuan Lun Pan, Moo Been Chang

Abstract:

Volatile organic compounds (VOCs) are among the major air contaminants, and they can react with nitrogen oxides (NOx) in the atmosphere to form ozone (O3) and peroxyacetyl nitrate (PAN) under solar irradiation, leading to environmental hazards. In addition, some VOCs are toxic at low concentration levels and cause adverse effects on human health. How to effectively reduce VOC emissions has therefore become an important issue. Thermal catalysis is regarded as an effective way to remove VOCs because it provides an oxidation route that successfully converts VOCs into carbon dioxide (CO2) and water (H2O(g)). Single perovskite-type catalysts are promising for VOC removal, and they have good potential to replace noble metals owing to their good activity and high thermal stability. Single perovskites can generally be described as ABO3 or A2BO4, where the A-site is often a rare earth or alkaline earth element. Typically, the B-site is a transition metal cation (Fe, Cu, Ni, Co, or Mn). The catalytic properties of perovskites mainly rely on the nature, oxidation states, and arrangement of the B-site cation. Interestingly, single perovskites can be further synthesized to form double perovskite-type catalysts, which can simply be represented as A2B’B”O6. Likewise, the A-site stands for an alkaline earth or rare earth element, and B’ and B” are transition metals. Double perovskites possess unique surface properties: structurally, ordered B’O6 and B”O6 octahedra alternate in three dimensions, corner-sharing along the three directions of the crystal lattice, while the A-site cations occupy the voids between the octahedra. This specific arrangement of the alternating B-site structure has attracted considerable attention. Double perovskites may therefore offer more variations than single perovskites, and this greater variation may promote catalytic performance; the activity of double perovskites is expected to exceed that of single perovskites in VOC removal. In this study, a double perovskite-type catalyst (La2CoMnO6) is prepared and evaluated for VOC removal. Single perovskites, including LaCoO3 and LaMnO3, are also tested for comparison. Toluene (C7H8) is one of the important VOCs commonly applied in chemical processes; in addition to its wide application, C7H8 has high toxicity even at low concentrations. Therefore, C7H8 is selected as the target compound in this study. Experimental results indicate that the double perovskite (La2CoMnO6) has better activity than the single perovskites. In particular, C7H8 can be completely oxidized to CO2 at 300 °C when La2CoMnO6 is applied. Characterization of the catalysts indicates that the double perovskite has unique surface properties and a higher amount of lattice oxygen, leading to higher activity. In durability tests, La2CoMnO6 maintains a C7H8 removal efficiency of 100% at 300 °C and 30,000 h⁻¹, and it also shows good resistance to CO2 (5%) and H2O(g) (5%) in the gas streams tested. For the various VOCs tested, including isopropyl alcohol (C3H8O), ethanal (C2H4O), and ethylene (C2H4), efficiencies as high as 100% could be achieved with the double perovskite-type catalyst operated at 300 °C, indicating that double perovskites are promising catalysts for VOC removal; possible mechanisms are elucidated in this paper.
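
The two figures of merit quoted above, removal efficiency and space velocity, follow from simple definitions; the sketch below computes both, with illustrative numbers chosen only to reproduce the quoted 30,000 h⁻¹ operating condition.

```python
def removal_efficiency_percent(c_in_ppm, c_out_ppm):
    """Fraction of the inlet VOC converted over the catalyst bed."""
    return 100.0 * (c_in_ppm - c_out_ppm) / c_in_ppm

def ghsv_per_hour(flow_ml_min, catalyst_vol_ml):
    """Gas hourly space velocity: feed volume per catalyst volume per hour."""
    return flow_ml_min * 60.0 / catalyst_vol_ml

print(removal_efficiency_percent(100.0, 0.0))                 # 100.0 (complete oxidation)
print(ghsv_per_hour(flow_ml_min=500.0, catalyst_vol_ml=1.0))  # 30000.0 h^-1
```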

Keywords: volatile organic compounds, toluene (C7H8), double perovskite-type catalyst, catalysis

Procedia PDF Downloads 165
170 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor using an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the ‘T₂-shortening’ effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, magnetic nanoparticles of two different sizes, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first immobilized separately with an anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry to capture and enrich P. aeruginosa cells. When incubated with the target, a ‘sandwich’ (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the bonding of MN₄₀₀ with P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa. Because of the different magnetic performance of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetization, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation, so that only unreacted MN₁₀ remains in the solution. The remaining MN₁₀, which is superparamagnetic and stable in a low-field magnetic field, serves as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated by detecting real food samples and validated using plate counting methods. Requiring only two steps and less than 2 hours, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 ×10² cfu/mL to 3.1 ×10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
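
The wide linear range reported above implies a calibration that is linear in the logarithm of cell concentration; the sketch below shows such a log-linear T₂ calibration and its inversion for unknown samples. All numerical values are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical calibration points spanning the reported linear range:
conc_cfu_ml = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])
delta_t2_ms = np.array([12.0, 25.0, 41.0, 55.0, 70.0, 83.0])  # placeholder T2 shifts

# Straight-line fit of the T2 shift against log10(concentration):
slope, intercept = np.polyfit(np.log10(conc_cfu_ml), delta_t2_ms, 1)

def estimate_concentration(delta_t2_ms_reading):
    """Invert the calibration line to estimate cfu/mL from a T2 shift."""
    return 10 ** ((delta_t2_ms_reading - intercept) / slope)

print(f"{estimate_concentration(48.0):.2e} cfu/mL")
```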

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 152
169 A Lightning Strike Mimic: The Abusive Use of Dog Shock Collar Presents as Encephalopathy, Respiratory Arrest, Cardiogenic Shock, Severe Hypernatremia, Rhabdomyolysis, and Multiorgan Injury

Authors: Merrick Lopez, Aashish Abraham, Melissa Egge, Marissa Hood, Jui Shah

Abstract:

A 3-year-old male with unknown medical history presented initially with encephalopathy, was intubated for respiratory failure, and was admitted to the pediatric intensive care unit (PICU) with refractory shock. During resuscitation in the emergency department, he was found to be in severe metabolic acidosis with a pH of 7.03 and was escalated on vasopressor drips for hypotension. His initial sodium was 174. He was noted to have burn injuries to his scalp, forehead, right axilla, bilateral arm creases, and lower legs. He had rhabdomyolysis (initial creatine kinase 5,430 U/L with peak levels of 62,340 U/L; normal <335 U/L), cardiac injury (initial troponin 88 ng/L with a peak at 145 ng/L; normal <15 ng/L), hypernatremia (peak 174; normal 140), hypocalcemia, liver injury, acute kidney injury, and neuronal loss on magnetic resonance imaging (MRI). Soft restraints and a shock collar were found in the home. He was critically ill for 8 days but was gradually weaned off drips, extubated, and started on feeds. Discussion. Electrical injury, specifically lightning injury, is an uncommon but devastating cause of injury in pediatric patients. This patient with suspected abusive use of a dog shock collar presented similarly to a lightning strike. Common entrance points include the hands and head, consistent with the linear wounds on our patient's forehead. When current enters, it passes through the tissues with the least resistance. Nerves, blood vessels, and muscles have high fluid and electrolyte content and are commonly affected. Exit points are the extremities, as in our child, who had circumferential burns around his arm creases and ankles. Linear burns preferentially follow areas of high sweat concentration and are thought to be due to vaporization of water on the skin's surface. The most common cause of death from a lightning strike is cardiopulmonary arrest. The massive depolarization of the myocardium can result in arrhythmias and myocardial necrosis. The patient presented in cardiogenic shock with evident cardiac damage. Electricity passing through vessels can lead to vaporization of intravascular water, which can explain his severe hypernatremia. He also sustained other internal organ injuries (adrenal glands, pancreas, liver, and kidneys). Electrical discharge also leads to direct skeletal muscle injury in addition to prolonged muscular spasm. Rhabdomyolysis, the acute damage of muscle, leads to the release of potentially toxic components into the circulation, which can lead to acute renal failure. The patient had severe rhabdomyolysis and renal injury. Early hypocalcemia has been consistently demonstrated in patients with rhabdomyolysis; it was present in this patient and contributed to increased vasopressor needs. Central nervous system injuries are also common and can include encephalopathy, hypoxic injury, and cerebral infarction. The patient had evidence of brain injury on MRI. Conclusion. Electrical injuries due to lightning strikes and abusive use of a dog shock collar are rare, but both can present with respiratory failure, shock, hypernatremia, rhabdomyolysis, brain injury, and multiorgan damage. Although rare, early identification and prompt management of acute and chronic complications are essential in these children.

Keywords: cardiogenic shock, dog shock collar, lightning strike, rhabdomyolysis

Procedia PDF Downloads 88
168 Preliminary Evaluation of Echinacea Species by UV-VIS Spectroscopy Fingerprinting of Phenolic Compounds

Authors: Elena Ionescu, Elena Iacob, Marie-Louise Ionescu, Carmen Elena Tebrencu, Oana Teodora Ciuperca

Abstract:

Echinacea species (Asteraceae) have received global attention because they are widely used for the treatment of colds, flu, and upper respiratory tract infections. Echinacea species contain a great variety of chemical components that contribute to their activity. The most important components responsible for the biological activity are those with high molecular weight, such as polysaccharides, polyacetylenes, highly unsaturated alkamides, and caffeic acid derivatives. The principal factors that may influence the chemical composition of Echinacea include the species and the part of the plant used (aerial parts or roots). In recent years the market for Echinacea has grown rapidly, and so have cases of adulteration/replacement, especially for Echinacea root. The identification of the presence or absence of certain biomarkers provides information for the safe use of Echinacea species in the food supplements industry. The aim of the study was the preliminary evaluation and fingerprinting by UV-VISIBLE spectroscopy of biomarkers, in terms of content of phenolic derivatives, of some Echinacea species (E. purpurea, E. angustifolia, and E. pallida) for identification and authentication of the species. The steps of the study were: (1) preparation of samples (extracts) from Echinacea species (non-hydrolyzed and hydrolyzed ethanol extracts); (2) preparation of reference substance samples (polyphenolic acids: caftaric acid, caffeic acid, chlorogenic acid, ferulic acid; flavonoids: rutoside, hyperoside, isoquercitrin; and their aglycones: quercitrin, quercetol, luteolin, kaempferol, and apigenin); (3) identification of specific absorption at wavelengths between 700-200 nm; (4) identification of the phenolic compounds from Echinacea species based on spectral characteristics and specific absorption, each class of compounds corresponding to a maximum absorption in the UV spectrum. The phytochemical compounds were identified at specific wavelengths between 700-200 nm, and the absorption intensities were measured. The results proved that the ethanolic extracts showed absorption peaks attributed to: phenolic compounds (free phenolic acids and phenolic acid derivatives), registered between 220-280 nm; compounds with unsymmetrical chemical structure (caffeic acid, chlorogenic acid, ferulic acid), showing a maximum absorption peak and an absorption "shoulder" that may be due to substitution of a hydroxyl or methoxy group; and flavonoid compounds (in free form or as glycosides) between 330-360 nm, due to the double bond in position 2,3 and the carbonyl group in position 4 of flavonols. The UV spectra showed two major absorption peaks (quercetin glycoside, rutin, etc.). The results obtained by UV-VIS spectroscopy revealed the presence of phenolic derivatives such as cichoric acid (240 nm), caftaric acid (329 nm), caffeic acid (240 nm), rutoside (205 nm), quercetin (255 nm), and luteolin (235 nm) in all three species of Echinacea; echinacoside is absent. This profile, together with the absence of the phenolic compound echinacoside, leads to the conclusion that the species harvested as Echinacea angustifolia and Echinacea pallida are also Echinacea purpurea. It can be said that preliminary fingerprinting of Echinacea species through correspondence with the phenolic derivatives profile can be achieved by UV-VIS spectroscopic investigation, which is an adequate technique for the preliminary identification and authentication of Echinacea in medicinal herbs.
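
The fingerprinting step described here amounts to locating absorption maxima and matching them to marker wavelengths; the sketch below shows one way to automate that match, with the spectrum file, prominence threshold, and ±3 nm window all being assumptions rather than the authors' procedure.

```python
import numpy as np
from scipy.signal import find_peaks

# Two-column spectrum (wavelength in nm, absorbance); the file is hypothetical.
wavelength_nm, absorbance = np.loadtxt("extract_uvvis.csv", delimiter=",", unpack=True)
peaks, _ = find_peaks(absorbance, prominence=0.02)  # prominence threshold assumed

# Marker wavelengths taken from the abstract:
markers = {"caftaric acid": 329, "caffeic acid": 240, "quercetin": 255, "luteolin": 235}
for name, ref_nm in markers.items():
    hit = any(abs(wavelength_nm[p] - ref_nm) <= 3 for p in peaks)  # +/- 3 nm window
    print(f"{name}: {'detected' if hit else 'absent'}")
```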

Keywords: Echinacea species, fingerprinting, phenolic compounds, UV-VIS spectroscopy

Procedia PDF Downloads 261
167 Achieving Flow at Work: An Experience Sampling Study to Comprehend How Cognitive Task Characteristics and Work Environments Predict Flow Experiences

Authors: Jonas De Kerf, Rein De Cooman, Sara De Gieter

Abstract:

For many decades, scholars have aimed to understand how work can become more meaningful by maximizing potential and enhancing feelings of satisfaction. One of the largest contributions towards such positive psychology was made with the introduction of the concept of ‘flow,’ which refers to a condition in which people feel intense engagement and effortless action. Since then, valuable research on work-related flow has indicated that this state of mind is related to positive outcomes for both organizations (e.g., social, supportive climates) and workers (e.g., job satisfaction). Yet, scholars still do not fully comprehend how such deep involvement at work is obtained, given that flow is considered a short-term, complex, and dynamic experience. Most research neglects that people who experience flow ought to be optimally challenged so that intense concentration is required. Because attention is at the core of this enjoyable state of mind, this study aims to comprehend how elements that affect workers’ cognitive functioning impact flow at work. Research on cognitive performance suggests that working on mentally demanding tasks (e.g., information processing tasks) requires workers to concentrate deeply, in turn leading to flow experiences. Based on social facilitation theory, working on such tasks in an isolated environment eases concentration. Prior research has indicated that working at home (instead of at the office) or in a closed office (rather than in an open-plan office) impacts employees’ overall functioning in terms of concentration and productivity. Consequently, we advance such knowledge and propose an interaction by combining cognitive task characteristics and work environments among part-time teleworkers. Hence, we not only aim to shed light on the relation between cognitive tasks and flow but also provide empirical evidence that workers performing such tasks achieve the highest states of flow while working either at home or in closed offices. In July 2022, an experience-sampling study will be conducted that uses a semi-random signal schedule to understand how task and environment predictors together impact part-time teleworkers’ flow. More precisely, about 150 knowledge workers will complete multiple surveys a day for two consecutive workweeks to report their flow experiences, cognitive tasks, and work environments. Preliminary results from a pilot study indicate that, at the between-person level, tasks high in information processing go along with high self-reported fluent productivity (i.e., making progress). As expected, evidence was found for higher fluency in productivity for workers performing information processing tasks both at home and in a closed office, compared to those performing the same tasks at the office or in open-plan offices. This study expands the current knowledge on work-related flow by looking at task and environmental predictors that enable workers to obtain such a peak state. In doing so, our findings suggest that practitioners should strive for an ideal alignment between tasks and work locations to enable working with both deep involvement and gratification.
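
The 'between level' result mentioned above relies on the standard decomposition used in experience-sampling analyses, splitting each momentary predictor into a person mean and a momentary deviation; the sketch below illustrates that step with hypothetical column names.

```python
import pandas as pd

df = pd.read_csv("esm_signals.csv")  # hypothetical: one row per signal per worker

# Split the momentary predictor into a stable between-person component
# (each worker's mean) and a within-person deviation from that mean.
person_mean = df.groupby("worker")["info_processing"].transform("mean")
df["info_between"] = person_mean
df["info_within"] = df["info_processing"] - person_mean
print(df[["worker", "info_between", "info_within"]].head())
```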

Keywords: cognitive work, office lay-out, work location, work-related flow

Procedia PDF Downloads 101
166 Cultural Intelligence for the Managers of Tomorrow: A Data-Based Analysis of the Antecedents and Training Needs of Today’s Business School Students

Authors: Justin Byrne, Jose Ramon Cobo

Abstract:

The growing importance of cross- or intercultural competencies (used here interchangeably) for business and management professionals is now a commonplace in both the academic and professional literature. This reflects two parallel developments. On the one hand, it is a consequence of the increased attention paid to a whole range of 'soft skills', now seen as fundamental to both individual and corporate success. On the other hand, and more specifically, the increasing demand for interculturally competent professionals is a corollary of ongoing processes of globalization, which multiply and intensify encounters between individuals and companies from different cultural backgrounds. Business schools have, for some decades, responded to the needs of the job market and their own students by providing training in intercultural skills, as they are encouraged to do by the major accreditation agencies on both sides of the Atlantic. Adapting Earley and Ang's (2003) formulation of Cultural Intelligence (CQ), this paper aims to help fill the lacunae in the current literature on intercultural training in three main ways. First, it offers an in-depth analysis of the CQ of a little-studied group: contemporary Millennial and 'Generation Z' business school students. The level of analysis distinguishes between the four dimensions of CQ (cognition, metacognition, motivation, and behaviour) and thereby provides a detailed picture of the strengths and weaknesses in CQ of the group as a whole, as well as of different sub-groups and profiles of students. Secondly, by crossing these individual-level findings with respondents' socio-cultural and educational data, this paper proposes and tests hypotheses regarding the relative impact and importance of four possible antecedents of intercultural skills identified in the literature: prior international experience, intercultural training, foreign language proficiency, and experience of cultural diversity in the habitual country of residence. Third, we use this analysis to suggest data-based intercultural training priorities for today's management students. These conclusions are based on the statistical analysis of the individual responses of some 300 Bachelor and Masters students at a major European business school to two online surveys: Ang, Van Dyne, et al.'s (2007) standard 20-question self-reporting CQ Scale, and an original questionnaire designed by the authors to collate information on each respondent's socio-demographic and educational profile relevant to our four hypotheses and explanatory variables. The data from both instruments were crossed in both descriptive statistical and regression analyses. This research shows that there is no statistically significant positive relationship between the four antecedents analyzed and overall CQ level. The exception in this respect is the statistically significant correlation between international experience and the cognitive dimension of CQ. In contrast, the results show that the combination of international experience and foreign language skills acting together does have a strong overall impact on CQ levels. These results suggest that selecting and/or training students with strong foreign language skills and providing them with international experience (through multinational programmes, academic exchanges, or international internships) constitutes one effective way of training the culturally intelligent managers of tomorrow.
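
As an indication of how the reported regression could be specified, the sketch below regresses overall CQ on the four antecedents and includes the interaction of international experience and language skills highlighted in the findings; the data file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cq_survey.csv")  # hypothetical: one row per respondent

# Overall CQ regressed on the four antecedents; the '*' term expands to
# both main effects plus their interaction, mirroring the finding that
# international experience and language skills matter jointly.
model = smf.ols(
    "cq_total ~ intl_experience * foreign_lang + ic_training + cultural_diversity",
    data=df,
).fit()
print(model.summary())
```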

Keywords: business school, cultural intelligence, millennial, training

Procedia PDF Downloads 158