Search results for: difficult airway

345 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen

Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev

Abstract:

The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. Firstly, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested the microbes at different concentrations. We used several steps of data analysis to classify and identify the microbes. All single particles were analysed using their light scattering and fluorescence parameters in the following steps: (1) they were treated with a smart filter block to remove non-microbes; (2) a classification algorithm verified that the filtered particles were microbes, based on the calibration data; (3) a probability threshold step (with the threshold defined by the user) assigned each particle a probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ identified microbes simultaneously, based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification yielded precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively. The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real-time. The bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ was able to classify the different types of microbes and then quantify them in real-time. Rapid-E+ can also identify pollen down to the species level with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
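The three-step single-particle analysis described above can be sketched as a generic classification pipeline. The snippet below is a minimal illustration, not Plair's proprietary algorithm: the feature layout, the RandomForest classifier, and the thresholds are assumptions chosen for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def smart_filter(features):
    """Step 1 (illustrative): discard particles whose signals are too weak to be microbes."""
    fluorescence_total = features[:, 0]
    return features[fluorescence_total > 0.05]         # placeholder filter threshold

# Step 2: train a classifier on calibration data from pure aerosolized species.
# Each row of X_cal holds per-particle scattering/fluorescence/lifetime features; y_cal holds species labels.
X_cal = np.random.rand(500, 20)                         # placeholder calibration features
y_cal = np.random.choice(["B. subtilis", "P. chrysogenum", "background"], 500)
clf = RandomForestClassifier(n_estimators=200).fit(X_cal, y_cal)

# Step 3: apply a user-defined probability threshold to newly measured particles.
X_new = smart_filter(np.random.rand(100, 20))
proba = clf.predict_proba(X_new)                        # class probabilities, 0-100%
confident = proba.max(axis=1) >= 0.9                    # user-defined threshold
labels = clf.classes_[proba.argmax(axis=1)][confident]  # species assigned to confident particles
```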

Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms

Procedia PDF Downloads 91
344 Synergistic Effect of Chondroinductive Growth Factors and Synovium-Derived Mesenchymal Stem Cells on Regeneration of Cartilage Defects in Rabbits

Authors: M. Karzhauov, А. Mukhambetova, M. Sarsenova, E. Raimagambetov, V. Ogay

Abstract:

Regeneration of injured articular cartilage remains one of the most difficult and unsolved problems in traumatology and orthopedics. Currently, surgical techniques for stimulating the regeneration of cartilage in damaged joints, such as multiple microperforation, mosaic chondroplasty, abrasion, and microfracture, are used to treat cartilage defects. However, as clinical practice has shown, they cannot provide full and sustainable recovery of articular hyaline cartilage. In this regard, current hopes for the regeneration of cartilage defects are reasonably associated with tissue engineering approaches to restore the structural and functional characteristics of damaged joints using stem cells, growth factors, and biopolymers or scaffolds. The purpose of the present study was to investigate the effects of chondroinductive growth factors and synovium-derived mesenchymal stem cells (SD-MSCs) on the regeneration of cartilage defects in rabbits. SD-MSCs were isolated from the synovial membrane of Flemish giant rabbits and expanded in complete α-MEM culture medium. Rabbit SD-MSCs were characterized by CFU assay and by their ability to differentiate into osteoblasts, chondrocytes, and adipocytes. The effects of growth factors (TGF-β1, BMP-2, BMP-4, and IGF-I) on MSC chondrogenesis were examined in micromass pellet cultures using histological and biochemical analysis. An articular cartilage defect (4 mm in diameter) in the intercondylar groove of the patellofemoral joint was created with a mosaic chondroplasty kit; the defect extended to the subchondral bone plate. Delivery of SD-MSCs and growth factors was performed in combination with hyaluronic acid (HA). The SD-MSC, growth factor, and control groups were compared macroscopically and histologically at 10, 30, 60, and 90 days after intra-articular injection. Our comparative in vitro study revealed that TGF-β1 and BMP-4 are key chondroinductive factors for both the growth and chondrogenesis of SD-MSCs. The strongest effect on MSC chondrogenesis was observed with the synergistic interaction of TGF-β1 and BMP-4. In addition, biochemical analysis of the chondrogenic micromass pellets revealed that the levels of glycosaminoglycans and DNA after combined treatment with TGF-β1 and BMP-4 were significantly higher than after individual application of these factors. The in vivo study showed that complete regeneration of cartilage defects after intra-articular injection of SD-MSCs with HA took 90 days. However, a single injection of SD-MSCs in combination with TGF-β1, BMP-4, and HA significantly increased the regeneration rate of the cartilage defects in rabbits: in this case, complete regeneration of the defects was observed 30 days after intra-articular injection. Thus, our in vitro and in vivo studies demonstrated that the combined application of rabbit SD-MSCs with chondroinductive growth factors and HA results in a strong synergistic effect on chondrogenesis, significantly enhancing regeneration of the damaged cartilage.

Keywords: Mesenchymal stem cells, synovium, chondroinductive factors, TGF-β1, BMP-2, BMP-4, IGF-I

Procedia PDF Downloads 306
343 Information Seeking and Evaluation Tasks to Enhance Multiliteracies in Health Education

Authors: Tuula Nygard

Abstract:

This study contributes to the pedagogical discussion on how to promote adolescents' multiliteracies, with an emphasis on information seeking and evaluation skills in contemporary media environments. The study is conducted in the school environment, applying perspectives from educational sciences and information studies to health communication and teaching. The research focus is on the teacher's role as a trusted person who guides students to choose and use credible information sources. Evaluating the credibility of information is often challenging. Specifically, children and adolescents may find it difficult to know what to believe and whom to trust, for instance, in health and well-being communication. Thus, advanced multiliteracy skills are needed. In the school environment, trust is based on the teacher's subject content knowledge, but also on the teacher's character and caring. A teacher's benevolence and approachability generate trustworthiness, which lays the foundation for good interaction with students and, further, for the teacher's pedagogical authority. The study explores teachers' perceptions of their pedagogical authority and their role as a trustee. In addition, the study examines what kind of multiliteracy practices teachers utilize in their teaching. The data will be collected by interviewing secondary school health education teachers during spring 2019. The analysis method is nexus analysis, an ethnographic research orientation. Classroom interaction, as the interviewed teachers see it, is scrutinized through a nexus analysis lens in order to expound social action in which people, places, discourses, and objects are intertwined. The crucial social actions in this study are information seeking and evaluation situations in which the teacher and the students together assess the credibility of information sources. The study is based on the hypothesis that a trustee's opinions of credible sources and guidance in information seeking and evaluation affect the choices of students, that is, of trustors. In the school context, the teacher's own experiences and perceptions of health-related issues cannot be brushed aside. Furthermore, adolescents are used to utilizing digital technology for day-to-day information seeking, but the chosen information sources are often not of very high quality. In school, teachers are inclined to recommend familiar sources, such as the health education textbook and the web pages of well-known health authorities. Students, in turn, rely on the teacher's guidance about credible information sources without using their own judgment. In terms of students' multiliteracy competences, information seeking and evaluation tasks in health education are excellent opportunities to practice and enhance these skills. Distinguishing correct information from incorrect information is particularly important in health communication because experts by experience are easy to find and their opinions are convincing. This can be addressed by employing the ideas of multiliteracy in the school subject of health education and in teacher education and training.

Keywords: multiliteracies, nexus analysis, pedagogical authority, trust

Procedia PDF Downloads 109
342 Explaining Irregularity in Music by Entropy and Information Content

Authors: Lorena Mihelac, Janez Povh

Abstract:

In 2017, we conducted a research study using data consisting of 160 musical excerpts from different musical styles to analyze the impact of the entropy of the harmony on the acceptability of music. In measuring the entropy of harmony, we were interested in unigrams (individual chords in the harmonic progression) and bigrams (the connection of two adjacent chords). In that study, 53 of the 160 musical excerpts were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was calculated as low. We explained this by particularities of the chord progression, which influence the listener's feeling of complexity and acceptability. We evaluated the same data twice more, with new participants in 2018 and with the same participants for a third time in 2019. These three evaluations showed that the same 53 musical excerpts, found to be difficult and complex in the 2017 study, again elicited a strong feeling of complexity. We proposed that the content of these musical excerpts, defined as "irregular," does not meet the listener's expectancy or basic perceptual principles, creating a stronger feeling of difficulty and complexity. As the "irregularities" in these 53 musical excerpts seem to be perceived without the participants being aware of them, affecting pleasantness and the feeling of complexity, they were defined as "subliminal irregularities" and the 53 musical excerpts as "irregular." In our recent study (2019) of the same data (used in the previous research), we proposed a new measure of the complexity of harmony, "regularity," based on the irregularities in the harmonic progression and other plausible particularities in the musical structure found in previous studies. In that study, we also proposed a list of 10 particularities which we assumed influence the participants' perception of complexity in harmony. These ten particularities are tested in this paper by extending the analysis of our 53 irregular musical excerpts from harmony to melody. In examining the melody, we used the computational model "Information Dynamics of Music" (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. To describe the features of the melody in these musical examples, we used four viewpoints: pitch, interval, duration, and scale degree. The results show that the texture of the melody (e.g., multiple voices, homorhythmic structure) and the structure of the melody (e.g., large interval leaps, syncopated rhythm, implied harmony in compound melodies) in these musical excerpts influence the participants' perception of complexity. High information content values were found in compound melodies in which implied harmonies seem to have suggested additional harmonies, affecting the participants' perception of the chord progression in the harmony by creating a sense of an ambiguous musical structure.
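As a minimal illustration of the two measures used above, the sketch below computes the Shannon entropy of chord unigrams and bigrams and the information content of each event under a simple maximum-likelihood frequency model. It is not a reimplementation of IDyOM; the chord symbols and the zeroth-order probability model are assumptions for the example.

```python
import math
from collections import Counter

def shannon_entropy(events):
    """H = -sum p(e) * log2 p(e), with p estimated from relative frequencies."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_content(events):
    """IC(e) = -log2 p(e) for each event, using the same frequency estimate."""
    counts = Counter(events)
    total = sum(counts.values())
    return [-math.log2(counts[e] / total) for e in events]

chords = ["I", "IV", "V", "I", "vi", "IV", "V", "I"]      # illustrative progression
bigrams = list(zip(chords, chords[1:]))

print("unigram entropy:", round(shannon_entropy(chords), 3))
print("bigram entropy: ", round(shannon_entropy(bigrams), 3))
print("information content per chord:", [round(ic, 2) for ic in information_content(chords)])
```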

Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM

Procedia PDF Downloads 133
341 Genetic Diversity of Cord Blood of the National Center of Blood Transfusion, Mexico (NCBT)

Authors: J. Manuel Bello-López, Julieta Rojo-Medina

Abstract:

Introduction: The transplant of Umbilical Cord Blood Units (UCBU) is a therapeutic possibility for patients with oncohaematological disorders, especially in children. In Mexico, 48.5% of oncological diseases in children 1-4 years old are leukemias, whereas in patients 5-14 and 15-24 years old, lymphomas and leukemias represent the second and third causes of death in these groups, respectively. It is therefore necessary to have more UCBU registries in order to ensure genetic diversity in the country, because the search for an appropriate UCBU is increasingly difficult for patients of mixed ethnicity. Objective: To estimate the genetic diversity (polymorphisms) of Human Leucocyte Antigen (HLA) Class I (A, B) and Class II (DRB1) in UCBU cryopreserved for transplant at the Cord Blood Bank of the NCBT. Material and Methods: HLA typing of 533 UCBU for transplant was performed from 2003 to 2012 at the Histocompatibility Laboratory of the Research Department of the NCBT (evaluated by the Immunogenetics Center, Los Angeles, CA). Class I HLA-A and HLA-B and Class II HLA-DRB1 typing was performed using medium-resolution Sequence-Specific Primer (SSP) typing. In cases of ambiguity detected by SSP, the Sequence-Specific Oligonucleotide (SSO) method was carried out. A strict analysis of population genetic parameters was done for 5 representative UCBU populations. Results: 46.5% of UCBU were collected from Mexico City, followed by the State of Mexico (30.95%), Puebla (8.06%), Morelos (6.37%), and Veracruz (3.37%); the remaining UCBU (4.75%) came from other states. The identified genotypes correspond to Amerindian origins (HLA-A*02, 31; HLA-B*39, 15, 48), Caucasian (HLA-A*02, 68, 01, 30, 31; HLA-B*35, 15, 40, 44, 07 and HLA-DRB1*04, 08, 07, 15, 03, 14), Oriental (HLA-A*02, 30, 01, 31; HLA-B*35, 39, 15, 40, 44, 07, 48 and HLA-DRB1*04, 07, 15, 03), and African (HLA-A*30 and HLA-DRB1*03). The genetic distances of the five states obtained by Cavalli-Sforza analysis showed significant genetic differences when comparing gene frequencies. The shortest genetic distance was between Mexico City and the state of Puebla (0.0039) and the largest between Veracruz and Morelos (0.0084). In order to identify significant differences between these states, an ANOVA test was performed, which demonstrated that UCBU differ significantly according to their origin (P < 0.05). This is shown by the divergence between branches in the Neighbor-Joining dendrogram. Conclusions: The NCBT provides UCBU to patients with oncohaematological disorders throughout the country. There is a group of patients for whom no compatible UCBU can be found due to their mixed ethnic origin; for example, the population of northern Mexico is mostly Caucasian. Most of the NCBT donors are of various ethnic origins, predominantly Amerindian and Caucasian, although some ethnic minorities such as Oriental, African, and pure indigenous groups are not represented. The NCBT is therefore establishing agreements with different states of Mexico to promote the altruistic donation of umbilical cord blood in order to enrich the genetic diversity of its registry.
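The Cavalli-Sforza distances quoted above are computed from allele frequencies in each state's UCBU. The sketch below shows one common formulation (the Cavalli-Sforza and Edwards chord distance) for a single HLA locus; the allele lists and the exact formula variant used by the authors are assumptions for illustration.

```python
import numpy as np

def allele_frequencies(alleles):
    """Relative frequency of each HLA allele observed in one population."""
    values, counts = np.unique(alleles, return_counts=True)
    return dict(zip(values, counts / counts.sum()))

def chord_distance(freq_a, freq_b):
    """Cavalli-Sforza & Edwards chord distance for one locus (one common formulation)."""
    shared = set(freq_a) | set(freq_b)
    s = sum(np.sqrt(freq_a.get(a, 0.0) * freq_b.get(a, 0.0)) for a in shared)
    return (2.0 / np.pi) * np.sqrt(2.0 * (1.0 - s))

# Hypothetical allele counts for two populations, used only to exercise the formula.
cdmx   = allele_frequencies(["A*02", "A*02", "A*31", "A*68", "A*30"])
puebla = allele_frequencies(["A*02", "A*31", "A*31", "A*02", "A*01"])
print(round(chord_distance(cdmx, puebla), 4))
```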

Keywords: cord blood, genetic diversity, human leucocyte antigen, transplant

Procedia PDF Downloads 382
340 Mental Health and Secondary Trauma in Service Providers Working with Refugees

Authors: Marko Živanović, Jovana Bjekić, Maša Vukčević Marković

Abstract:

Professionals and volunteers involved in refugee protection and support are faced on a daily basis with people who have experienced numerous traumatic events and, as such, are subjected to secondary traumatization (ST). The aim of this study was to provide insight into risk factors for ST in helpers working with refugees in Serbia. A total of 175 participants working with refugees completed the Secondary Traumatization Questionnaire, a checklist of refugees' traumatic experiences, the Hopkins Symptom Checklist (HSCL) assessing depression and anxiety symptoms, a quality of life questionnaire (MANSA), the HEXACO personality inventory, and the COPE inventory assessing coping mechanisms. In addition, participants provided information on work-related problems. Qualitative analysis of answers to the question about the most difficult part of their job showed that burnout-related issues cluster around three recurrent topics that can be considered the most prominent generators of stress, namely: 'lack of organization and cooperation', 'not being able to do enough', and 'hard to take it and to process it'. Factor analysis (maximum likelihood extraction, Promax rotation) showed that ST comprises two correlated factors (r = .533, p < .01), namely Psychological deficits and Intrusions. The results showed that risk factors for ST can be found in three interrelated sources: 1) work-related problems, 2) personality-related risk factors, and 3) clients' traumatic experiences. Among personality-related factors, risk factors for Intrusions were high Emotionality (β = .221, p < .05) and Altruism (β = .322, p < .01), while low Extraversion (β = -.365, p < .01) was a risk factor for Psychological deficits. In addition, the use of maladaptive coping mechanisms correlated with Psychological deficits: mental disengagement (r = .253, p < .01), behavioral disengagement (r = .274, p < .01), focusing on distress and venting of emotions (r = .220, p < .05), denial (r = .164, p < .05), and substance use (r = .232, p < .01); Intrusions correlated with mental disengagement (r = .251, p < .01) and denial (r = .183, p < .05). Regarding clients' traumatic experiences, both the quantity of traumatic events in the country of origin (for Deficits r = .226, p < .01; for Intrusions r = .174, p < .05) and in transit (for Deficits r = .288, p < .01), as well as certain content-related features of such experiences (especially experiences severely dislocated from 'everyday reality'), were related to ST. In addition, Psychological deficits and Intrusions were accompanied by symptoms of depression (r = .760, p < .01; r = .552, p < .01) and anxiety (r = .740, p < .01; r = .447, p < .01) and by lower overall quality of life (r = -.454, p < .01; r = .256, p < .01). The results indicate that the psychological vulnerability of persons working with traumatized individuals can be found in certain personality traits and in the use of maladaptive coping mechanisms, which prevent them from dealing with work-related issues and from coping with the quantity and quality of the traumatic experiences they are faced with, affecting their psychological well-being. Acknowledgement: This research was funded by IRC Serbia.

Keywords: mental health, refugees, secondary traumatization, traumatic experiences

Procedia PDF Downloads 235
339 The Effects of Exercise Training on LDL Mediated Blood Flow in Coronary Artery Disease: A Systematic Review

Authors: Aziza Barnawi

Abstract:

Background: Regular exercise reduces risk factors associated with cardiovascular disease. Over the past decade, exercise interventions have been introduced to reduce the risk of and prevent coronary artery disease (CAD). Elevated low-density lipoprotein (LDL) contributes to the formation of atherosclerosis; its deposits on the endothelium narrow the coronary artery and impair endothelial function. The flow-mediated dilation (FMD) technique is therefore used to assess endothelial function. The results of previous studies have been inconsistent and difficult to interpret across different types of exercise programs. The relationship between exercise therapy and lipid levels has been extensively studied, and exercise is known to improve the lipid profile and endothelial function; however, its effectiveness in altering LDL levels and improving blood flow remains controversial. Objective: This review aims to explore the evidence and quantify the impact of exercise training on LDL levels and on vascular function assessed by FMD. Methods: The electronic databases PubMed, Google Scholar, Web of Science, the Cochrane Library, and EBSCO were searched using the keywords: "low and/or moderate aerobic training", "blood flow", "atherosclerosis", "LDL mediated blood flow", "cardiac rehabilitation", "low-density lipoproteins", "flow-mediated dilation", "endothelial function", "brachial artery flow-mediated dilation", "oxidized low-density lipoproteins", and "coronary artery disease". Included studies lasted 6 weeks or more and reported effects on LDL levels and/or FMD. Studies of training at different intensities and of endurance training in healthy or CAD individuals were included. Results: Twenty-one randomized controlled trials (RCTs) (14 FMD and 7 LDL studies) with 776 participants (605 exercise participants and 171 control participants) met the eligibility criteria and were included in the systematic review. Endurance training resulted in a greater reduction in LDL levels and their subfractions and a better FMD response. Overall, the training groups showed improved physical fitness compared with the control groups. Participants whose exercise duration was ≥150 minutes/week had significant improvements in FMD and LDL levels compared with those exercising <150 minutes/week. Conclusion: Although the relationship between physical training, LDL levels, and blood flow in CAD is complex and multifaceted, there are promising results for primary and secondary prevention of CAD through exercise. Exercise training, including resistance, aerobic, and interval training, is positively correlated with improved FMD. However, the small body of evidence from the LDL studies (resistance and interval training) did not show a significant association with improved blood flow. Increasing evidence suggests that exercise training is a promising adjunctive therapy to improve cardiovascular health, potentially improving blood flow and contributing to the overall management of CAD.

Keywords: exercise training, low density lipoprotein, flow mediated dilation, coronary artery disease

Procedia PDF Downloads 74
338 Honneth, Feenberg, and the Redemption of Critical Theory of Technology

Authors: David Schafer

Abstract:

Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse's work has been regarded as outdated, a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse's view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called 'technical elements' that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or 'technical code' built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technology only in terms of its technical elements, to the neglect of its technical codes. Nevertheless, Feenberg's account fails to explain what is normatively problematic with such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely wants to be doing that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas's version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg's own legitimate critiques of Habermas's portrayals of technology as reified or 'norm-free.' This paper argues that a better foundation may be found in Axel Honneth's recent text, Freedom's Right (Honneth, 2014). Though Honneth says little there explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas's systems-theoretic approach. On this 'normative functionalist' account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls 'social freedom.' Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth's social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg's work, the idea that technology includes within itself a normative grammar (a technical code) takes on much plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth's dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg's sociological account of technology than Habermas's systems theory.

Keywords: Habermas, Honneth, technology, Feenberg

Procedia PDF Downloads 198
337 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for the study of the effects of mechanical forces on cells. However, these imaging experiments present challenges which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict the morphology of cellular components for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, the algorithm produced qualitatively good predictions on the training set. It was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality (clear lining and shape of the membrane, clearly showing the boundaries of each cell) proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
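The preprocessing step described above (scaling every z-slice to a mean pixel intensity of 0.5 and pairing membrane inputs with nuclei labels) can be sketched as follows. This is a minimal illustration, not the project's actual pipeline; the file names and the tifffile reader are assumptions.

```python
import numpy as np
import tifffile  # assumed TIFF reader; any confocal z-stack reader would do

def normalize_stack(stack, target_mean=0.5):
    """Scale each z-slice so its mean pixel intensity equals target_mean."""
    stack = stack.astype(np.float32)
    slices = []
    for img in stack:
        m = img.mean()
        slices.append(img * (target_mean / m) if m > 0 else img)
    return np.stack(slices)

# Hypothetical file names; the study used registered membrane and nuclei z-stacks of 20 images each.
membrane = normalize_stack(tifffile.imread("membrane_zstack.tif"))   # network input
nuclei   = normalize_stack(tifffile.imread("nuclei_zstack.tif"))     # ground-truth label
training_pairs = list(zip(membrane, nuclei))                         # (input slice, label slice) pairs
```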

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 205
336 The Biomechanical Assessment of Balance and Gait for Stroke Patients and the Implications in the Diagnosis and Rehabilitation

Authors: A. Alzahrani, G. Arnold, W. Wang

Abstract:

Background: Stroke commonly occurs in middle-aged and elderly populations, and the early diagnosis of stroke is still difficult. Patients who have suffered a stroke have different balance and gait patterns from healthy people. Advanced techniques of motion analysis have been routinely used in the clinical assessment of cerebral palsy; however, so far, little research has been done on the direct diagnosis of early stroke patients using motion analysis. Objectives: The aim of this study was to investigate whether patients with stroke have different balance and gait from healthy people and which biomechanical parameters could be used to predict and diagnose patients who are at risk of stroke. Methods: Thirteen patients with stroke were recruited as subjects, and their gait and balance were analysed. Twenty age-matched healthy subjects participated in this study as a control group. All subjects' gait and balance data were collected using Vicon Nexus® to obtain the gait parameters and the kinetic and kinematic parameters of the hip, knee, and ankle joints in three planes for both limbs. Participants stood on force platforms to perform a single-leg balance test; then they were asked to walk along a 10 m walkway at their comfortable speed. Participants performed 6 trials of single-leg balance for each side and 10 trials of walking. From the recorded trials, three good ones were analysed using the Vicon Plug-in-Gait model to obtain gait parameters (e.g., walking speed, cadence, stride length) and joint parameters (e.g., joint angles, forces, moments). Results: The temporal-spatial variables of the stroke subjects were compared with those of the healthy subjects, and a significant difference (p < 0.05) was found between the groups. Step length, speed, and cadence were lower in stroke subjects than in the healthy group. The stroke patient group showed significantly decreased gait speed (mean ± SD: 0.85 ± 0.33 m/s), cadence (96.71 ± 16.14 steps/min), and step length (0.509 ± 0.17 m) compared with the healthy group (gait speed 1.2 ± 0.11 m/s, cadence 112 ± 8.33 steps/min, and step length 0.648 ± 0.43 m). Moreover, patients with stroke showed significant differences in ankle, hip, and knee joint kinematics in the sagittal and coronal planes. The single-leg balance test also showed a significant difference between groups: single-leg stance time was shorter in the stroke patients (5.97 ± 6.36 s) than in the healthy group (14.36 ± 10.20 s). Conclusion: Our results showed significant differences between stroke patients and healthy subjects in various aspects of the gait analysis and balance test. As a consequence of these findings, biomechanical parameters such as joint kinematics, gait parameters, and the single-leg stance balance test could be used in clinical practice to predict and diagnose patients who are at high risk of further stroke.
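The group comparisons reported above are standard two-sample tests on temporal-spatial gait parameters. A minimal sketch, using synthetic samples drawn from the reported group means and standard deviations rather than the study's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
stroke_speed  = rng.normal(0.85, 0.33, 13)   # 13 stroke patients (reported mean, SD)
healthy_speed = rng.normal(1.20, 0.11, 20)   # 20 healthy controls

t, p = stats.ttest_ind(stroke_speed, healthy_speed, equal_var=False)  # Welch's t-test
print(f"gait speed: t = {t:.2f}, p = {p:.4f}")   # a group difference is flagged when p < 0.05
```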

Keywords: gait analysis, kinetics, kinematics, single-leg stance, Stroke

Procedia PDF Downloads 143
335 Blunt Abdominal Trauma Management in Adult Patients: An Investigation on Safety of Discharging Patients with Normal Initial Findings

Authors: Rahimi-Movaghar Vafa, Mansouri Pejman, Chardoli Mojtaba, Rezvani Samina

Abstract:

Introduction: Blunt abdominal trauma is one of the leading causes of morbidity and mortality in all age groups, but the diagnosis of serious intra-abdominal pathology is difficult, and most injuries are occult on initial investigation. There is still controversy about which patients should undergo abdominal/pelvic CT, which patients need further observation, and which patients can be discharged safely. The aim of this study was to determine whether it is safe to discharge patients with blunt abdominal trauma who have normal initial findings. Methods: This non-randomized cross-sectional study was conducted from September 2013 to September 2014 at two level I trauma centers, Sina Hospital and Rasoul-e-Akram Hospital (Tehran, Iran). The inclusion criterion was admission for suspected blunt abdominal trauma (BAT); exclusion criteria were serious head and neck, chest, spine, or limb injuries requiring surgical intervention, unstable vital signs, pregnancy with a gestational age over 3 months, and being homeless or without an exact home address. 390 patients with blunt abdominal trauma were examined, and the necessary data, including demographic data, abdominal examination findings, FAST results, and laboratory test results (hematocrit, base deficit, urinalysis) on admission and at 6 and 12 hours after admission, were recorded. Patients with a normal physical examination, laboratory tests, and FAST were discharged from the ED within 12 hours with an explanation of the alarm signs and were followed up after 24 hours and 1 week by telephone call. Patients with abnormal findings on physical examination, laboratory tests, or FAST underwent abdominopelvic CT scanning. Results: The study included 390 patients with blunt abdominal trauma between 12 and 80 years of age (mean age, 37.0 ± 13.7 years), and the mean duration of hospitalization was 7.4 ± 4.1 hours. 88.6% of the patients were discharged from hospital before 12 hours. The odds ratio (OR) for having any symptoms was 0.160 for discharge after 6 hours and 0.117 for discharge after 12 hours, both statistically significant. The variables age, systolic and diastolic blood pressure, heart rate, respiratory rate, hematocrit, and base deficit at admission and at 6 and 12 hours after admission showed no statistically significant relationship with discharge time. Of the 390 patients, 190 had a normal initial physical examination, laboratory data, and FAST findings and did not show any signs or symptoms at their next assessment or at telephone follow-up. Conclusion: Patients with no symptoms at admission (completely normal physical examination, normal ultrasound, normal hematocrit, normal base deficit, and absence of microscopic hematuria) and a good family and social situation can be safely discharged from the emergency department.
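For readers unfamiliar with the odds ratios quoted above, the sketch below shows how an OR is computed from a 2x2 contingency table. The counts are hypothetical placeholders, not the study's data, and the study's ORs may well come from a regression model rather than a raw table.

```python
# Hypothetical 2x2 table (counts are placeholders, not study data):
#                         symptomatic   asymptomatic
# discharged after 6 h         a              b
# discharged earlier           c              d
a, b, c, d = 5, 120, 30, 235

odds_ratio = (a / b) / (c / d)          # odds of symptoms in one group divided by the other
print(f"OR = {odds_ratio:.3f}")         # values < 1 mean lower odds, as with the reported 0.160 and 0.117
```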

Keywords: blunt abdominal trauma, patient discharge, emergency department, FAST

Procedia PDF Downloads 366
334 Perception of the End of a Same Sex Relationship and Preparation towards It: A Qualitative Research about Anticipation, Coping and Conflict Management against the Backdrop of Partial Legal Recognition

Authors: Merav Meiron-Goren, Orna Braun-Lewensohn, Tal Litvak-Hirsh

Abstract:

In recent years, there has been an increasing tendency towards separation and divorce in relationships. Nevertheless, many couples in a first marriage do not anticipate this as a probable possibility and do not prepare for it. Same-sex couples establishing a family encounter a much more complicated situation than heterosexual couples do. Although there is a trend towards legal recognition of same-sex marriage, many countries, including Israel, do not recognize it. The absence of legal recognition, or the existence of only partial recognition, creates complexity for these couples. They have to fight for their right to establish a family, for example for the recognition of a woman's biological child as the child of her female spouse as well, or for the option of surrogacy for a male couple who want children, and more. The lack of legal recognition is a burden on the lives of these couples. In the absence of clear norms regarding the conduct of the family unit, the couples must define the family structure for themselves and deal with everyday dilemmas that lack institutional solutions. This may increase friction between the two partners, and it is one of the factors that make it difficult for them to maintain the relationship. This complexity exists, perhaps even more so, in separation. The end of a relationship is often accompanied by a deep crisis, causing pain and stress. In most cases, there are also other conflicts that must be settled; these are more complicated when rights are in doubt or do not exist at all. Complex issues for separating same-sex couples may include matters of property, recognition of parenthood, and care and support for the children. The significance of the study lies in the fact that same-sex relationships are becoming more and more widespread and are an integral part of society; even so, there is still an absence of research focusing on such relationships and their ending. The objective of the study is to research the perceptions of same-sex couples regarding the possibility of separation, preparation for it, conflict management, and the resolution of disputes through the separation process. It is also important to understand the point of view of couples that have gone through separation and how they coped with the emotional and practical difficulties involved in the separation process. The doctoral research will use a qualitative research method with a phenomenological approach, based on semi-structured in-depth interviews. The interviewees will be divided into three groups, with about 10 couples in each: couples at the beginning of a relationship, couples during the separation crisis, and couples after separation, viewed with a time perspective. The main theoretical model serving as the basis of the study will be the Lazarus and Folkman theory of coping with stress. This model deals with the coping process, including the cognitive appraisal of an experience as stressful, the appraisal of coping resources, and the use of coping strategies. The strategies are divided into two main groups: emotion-focused forms of coping and problem-focused forms of coping.

Keywords: conflict management, coping, legal recognition, same-sex relationship, separation

Procedia PDF Downloads 143
333 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments

Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora

Abstract:

Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, e.g., the so-called "Homogeneous Anisotropic Hardening" (HAH) model for describing yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g., the plane-stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests, and tension-compression or shear-reverse-shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy for overcoming these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and the computations are therefore restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model, including an implementation of the backstress, is used in a spectral solver framework to generate virtual experiments for three deep-drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, the model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can describe anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed for the optimization of metal forming processes. Further, thanks to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
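To illustrate the calibration idea (fitting a macroscopic model to stress-strain curves generated by RVE computations), the sketch below fits a simple Voce hardening law to a synthetic virtual uniaxial tension curve. The HAH model itself has many more parameters; the Voce law, the placeholder data, and the parameter values here are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def voce(eps_p, sigma0, Q, b):
    """Voce hardening law: flow stress as a function of equivalent plastic strain."""
    return sigma0 + Q * (1.0 - np.exp(-b * eps_p))

# Placeholder arrays standing in for a virtual uniaxial tension curve from an RVE simulation.
eps_p = np.linspace(0.0, 0.2, 50)
sigma_virtual = voce(eps_p, 180.0, 120.0, 15.0) + np.random.normal(0.0, 2.0, eps_p.size)

params, _ = curve_fit(voce, eps_p, sigma_virtual, p0=(150.0, 100.0, 10.0))
print("calibrated (sigma0, Q, b):", np.round(params, 2))
```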

Keywords: anisotropic hardening, crystal plasticity, micro structure, spectral solver

Procedia PDF Downloads 316
332 Stress and Overload in Mothers and Fathers of Hospitalized Children: A Comparative Study

Authors: Alessandra Turini Bolsoni Silva, Nilson Rogério Da Silva

Abstract:

Hospitalization for long periods and the experience of invasive and painful clinical procedures can trigger a set of stressors in children, family members, and professionals, leading to stress. Mothers are, in general, the main caregivers and therefore experience a high degree of sadness and stress, with an impact on their mental health. The father, in the face of the mother's absence, needs to assume other responsibilities, such as domestic activities and care for the healthy children, in addition to work. He also has to deal with changes in family and work relationships during the child's hospitalization, including disagreements and changes in the relationship with the partner, changes in the relationship with the children, and difficulty reconciling the new caregiving tasks with work. A consequence of the hospitalization process is the interruption of the routine activities of both the child and the family members responsible for care, who can go through stressful moments due to family disruption, attention focused only on the child, and sleepless nights. In this sense, both the mother and the father can have their health affected by their child's hospitalization. The present study aims to compare the prevalence of stress and overload in mothers and fathers of hospitalized children, as well as possible associations with care-related activities. The participants were 10 fathers and 10 mothers of children hospitalized in a hospital located in a medium-sized city in the interior of São Paulo. Three instruments were used for data collection: 1) a script to characterize the participants; 2) the Lipp Stress Symptom Inventory (ISSL, 2000); and 3) the Zarit Burden Interview (ZBI). Contact was made with the management of the hospital to present the objectives of the project, and authorization was requested for the participation of the parents; after agreement, the interviews were carried out at a time and place convenient for each participant, who then signed the informed consent form. Data were analyzed according to the instrument manuals and organized in figures and tables. The results revealed that both fathers and mothers have their family and professional routines affected by the hospitalization of their children, with the consequent presence of indicators of stress and overload. However, the study points to a greater presence of stress and overload in mothers, owing to their role as the main caregiver, often interrupting their professional life to provide care. In the case of the father, the routine changes because he takes on household chores and care for the other children, with professional life less affected. It is hoped that the data can guide future interventions that promote strategies favoring care while preserving the health of caregivers, and that include both mothers and fathers, considering that both are affected, albeit in different ways.

Keywords: stress, overload, caregivers, parents

Procedia PDF Downloads 66
331 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques

Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola

Abstract:

Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for adapting navigation safety, owing to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help quantify the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by exploiting the spectral response contrast of intertidal classes in the visible, near-infrared, and mid-infrared bands. Tidal areas are highly reflective in the visible bands, mainly because of the presence of fine sand deposits. However, obtaining cloud-free optical data that coincide with low tide in intertidal zones in northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with frequent revisits over intertidal zones. Multi-polarization SAR parameters have been used successfully for mapping intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters and can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter; for instance, an increase in backscatter intensity associated with low coherence is an indicator of abrupt surface change. In this research, we present preliminary results from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq, on the southwest shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low-tide conditions were selected and used to build an interferometric stack. The displacements along the line of sight observed using the HH and VV polarizations are compared with the changes detected using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. The results show the consistency of both approaches in their ability to monitor changes in intertidal zones.
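A central quantity in the interferometric part of this analysis is the coherence between two co-registered SLC acquisitions. A minimal sketch of the standard windowed coherence estimator is given below; the window size and array names are assumptions, and real processing would also include co-registration and flat-earth/topographic phase removal.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1, slc2, win=5):
    """Magnitude of the interferometric coherence estimated over a win x win window.
    slc1, slc2: co-registered complex SLC images (2D numpy arrays)."""
    cross = slc1 * np.conj(slc2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) *
                  uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

# Low coherence combined with increased backscatter flags abrupt surface change, as noted above.
```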

Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2

Procedia PDF Downloads 137
330 Examination of Corrosion Durability Related to Installed Environments of Steel Bridges

Authors: Jin-Hee Ahn, Seok-Hyeon Jeon, Young-Bin Lee, Min-Gyun Ha, Yu-Chan Hong

Abstract:

The corrosion durability of steel bridges is generally affected by the atmospheric environment at the bridge installation site, since corrosion is related to environmental factors such as humidity, temperature, airborne salt, and chemical components such as SO₂ and chlorides. Thus, atmospheric environmental conditions should be measured to estimate the corrosion condition of steel bridges, in addition to measuring the actual corrosion damage of structural members. Even in the same atmospheric environment, the corrosion environment may differ depending on the orientation of the structural members. In this study, therefore, atmospheric corrosion monitoring was conducted using an atmospheric corrosion monitoring sensor, a hygrometer, a thermometer, and an airborne salt collection device to examine the corrosion durability of steel bridges. As the target bridge for corrosion durability monitoring, a cable-stayed bridge with steel truss members, located on the coast and connecting islands, was selected. Atmospheric corrosion monitoring was carried out with respect to the orientation of the structural members of this cable-stayed bridge with truss-type girders, since it consists of members with various orientations. For the monitoring, the daily average corrosion current was measured at each monitored member to evaluate the corrosion environment and corrosion level of members with different orientations, which experience different corrosion environments within the same installation area. To compare corrosion durability against the monitoring data for each monitored member, monitoring steel plates were additionally installed on the same members. The monitoring plates, made of carbon steel, were fabricated with a width of 60 mm and a thickness of 3 mm; their surfaces were blast-cleaned to remove rust, and their weights were measured before installation on each structural member. After a 3-month exposure period in the actual atmospheric corrosion environment at the bridge, the surface conditions of the atmospheric corrosion monitoring sensors and the monitoring steel plates were inspected for corrosion damage. When severe deterioration of the sensors or corrosion damage of the monitoring steel plates was found, they were replaced or collected. From the 3-month exposure tests on the actual steel bridge with structural members of various orientations, rust was found on the surfaces of the monitoring steel plates, and visual inspection showed differences in the corrosion rate depending on the orientation of the structural member. The daily average corrosion current also varied with member orientation. However, it is difficult to identify relative differences in the corrosion durability of steel structural members from short-term monitoring results; after longer exposure tests in this corrosion environment, the differences in corrosion durability depending on the installation conditions of steel bridges can be evaluated clearly. Acknowledgements: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03028755).

Keywords: corrosion, atmospheric environments, steel bridge, monitoring

Procedia PDF Downloads 362
329 Repair of Thermoplastic Composites for Structural Applications

Authors: Philippe Castaing, Thomas Jollivet

Abstract:

As a result of their advantages, i.e., recyclability, weldability, and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of damage is due to impact, and 85% of the damage repaired is on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacement of sections or panels seems difficult economically speaking, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged thermoplastic composite part and recovers its initial mechanical properties. The classification of impact damage is not easy: speaking of low-energy impact (less than 35 J) can be quite misleading when high speeds, small thicknesses, or thermoplastic resins are considered. Crash and perforation at higher energies create severe damage, and the structures are replaced without repair, so we consider here only damage due to low-energy impacts, which in laminates consists of transverse cracking, delamination, and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to fiber rupture and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain. The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; and validation of the repair by mechanical characterization (compression). In this study, the impact tests are performed at various energy levels on thermoplastic composites (PA/C, PEEK/C, and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic (US) inspection and by micrographic observations through the part thickness. The samples were additionally characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming, and after reconsolidation the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of repair after low-energy impact on thermoplastic composites, as the samples recover their properties. As a first step of the study, the "repair" is performed by reconsolidation on a thermoforming press, but an in situ process to reconsolidate the damaged parts could also be envisaged.

Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic

Procedia PDF Downloads 305
328 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology

Authors: A. Anastasiou, K. S. Tingay

Abstract:

Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate real world datasets that are most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which the institution exists, as well as a category of the main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, consisting of 9,885 distinct affiliations with correspondence in GRID. Of these articles, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage. Collaborations with industry or government may exclude healthcare institutes that may have embargos or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important both for ensuring the validity of results, and economy of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet specified if these healthcare institutes are holding data, or merely publishing on that topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
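
As a rough illustration of the affiliation-filtering step described above, the sketch below parses a PubMed XML export and keeps articles with at least one author at an EU-based healthcare institute. It is a minimal, assumption-laden stand-in for the bibInsight pipeline (which is not reproduced here): the GRID export file name, its column names, and the substring-matching rule are illustrative assumptions.

```python
# Minimal sketch, not the bibInsight implementation: file names, GRID column
# names, and the substring-matching rule are illustrative assumptions.
import csv
import xml.etree.ElementTree as ET

EU28 = {
    "Austria", "Belgium", "Bulgaria", "Croatia", "Cyprus", "Czech Republic",
    "Denmark", "Estonia", "Finland", "France", "Germany", "Greece", "Hungary",
    "Ireland", "Italy", "Latvia", "Lithuania", "Luxembourg", "Malta",
    "Netherlands", "Poland", "Portugal", "Romania", "Slovakia", "Slovenia",
    "Spain", "Sweden", "United Kingdom",
}

def load_grid(path="grid_institutes.csv"):
    """Map lower-cased institute names to (country, type) from a GRID export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Name"].lower(): (row["Country"], row["Type"])
                for row in csv.DictReader(f)}

def is_eu_healthcare(affiliation, grid):
    """True if the free-text affiliation mentions an EU 'Healthcare' institute."""
    text = affiliation.lower()
    return any(name in text and country in EU28 and kind == "Healthcare"
               for name, (country, kind) in grid.items())

def eu_healthcare_pmids(pubmed_xml="pubmed_result.xml"):
    """PMIDs of articles with at least one EU healthcare affiliation."""
    grid = load_grid()
    root = ET.parse(pubmed_xml).getroot()
    return [article.findtext(".//PMID")
            for article in root.iter("PubmedArticle")
            if any(is_eu_healthcare(aff.text or "", grid)
                   for aff in article.iter("Affiliation"))]
```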

Keywords: data reuse, data discovery, data linkage, journal articles, text mining

Procedia PDF Downloads 117
327 Challenges Faced in Hospitality and Tourism Education: Rural Versus Urban Universities

Authors: Adelaide Rethabile Motshabi Pitso-Mbili

Abstract:

The disparity between universities in rural and urban areas of South Africa remains an ongoing issue. There are considerable variations between these universities, such as in the performance of students and lecturers, which is viewed as a worrying discrepancy related to knowledge gaps or educational inequality. According to research, rural students routinely perform worse than urban students in sub-Saharan Africa, and the disparity is wide when compared to the global average. This may be a result of the different challenges that universities in rural and urban areas face. Hence, the aim of this study was to compare the challenges faced by rural and urban universities, especially in hospitality and tourism programs, and to recommend possible solutions. The study used a qualitative methodology comprising focus groups and in-depth interviews: eight focus groups of final-year students in hospitality and tourism programs from four institutions were held, and four department heads of those programs participated in in-depth interviews. The study was motivated by teacher collaboration theory, which proposes that colleagues can help one another for the benefit of students and the institution. It was revealed that rural universities face more challenges than urban universities when it comes to hospitality and tourism education. The interviews showed that universities in rural areas have a high staff turnover rate and offer fewer courses due to a lack of resources, such as the infrastructure, staff, equipment, and materials needed to give students hands-on training on campus across the various hospitality and tourism programs. Urban universities, on the other hand, provide a variety of courses in hospitality and tourism, and while resources are seldom an issue, they must deal with large class enrolments and insufficient funding to support them all. Additionally, students in remote locations noted that the lack of water and electricity makes it difficult for them to carry out practical lessons. It is recommended that universities collaborate or develop partnerships to help one another overcome obstacles, and that universities in rural areas visit those in urban areas to observe how things are done there and to determine where they can improve. The significance of the study is that it can bring rural and urban educational processes and practices into greater alignment of standards, benefits, and achievements, and also help retain staff members within rural universities. The study contributes to the literature by expanding the accumulated knowledge on challenges, trends, and innovations in hospitality and tourism education and by setting forth an agenda for future research.

Keywords: hospitality and tourism education, rural and urban universities, collaboration, teacher and student performance, educational inequality

Procedia PDF Downloads 63
326 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate into appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of SnO₂ nanotube arrays combines a barrier-free anodic aluminum oxide (AAO) template with atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in a 5 wt% phosphoric acid solution at 5 °C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15-min over-anodization and a 30-min post-chemical dissolution were used to remove the barrier oxide at the bottom end of the pores and generate a barrier-free AAO template. ALD using TiCl4 and H₂O as reactants then followed to grow a thin layer of SnO₂ on the template and form the SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in a 5 wt% phosphoric acid solution at 50 °C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability. Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350 °C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were therefore used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films were characterized with an Ecopia HMS-3000 Hall-effect measurement system and were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂-95% N₂ gas mixture was monitored with a Picotest M3510A 6½-digit multimeter. It was found that, at 200 °C, the 30-nm-wall SnO₂ nanotube-array sensor shows the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100 °C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ walls is necessary to improve the sensing ability at temperatures below 100 °C.
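
As a quick plausibility check (not part of the paper), the quoted resistivity can be reproduced from the reported Hall data via ρ = 1/(q·n·μ); the short snippet below uses only the carrier concentration and mobility stated above.

```python
# Back-of-the-envelope check (not from the paper): resistivity from Hall data.
q = 1.602e-19        # elementary charge, C
n = 1.1e20           # carrier concentration, cm^-3 (reported value)
mu = 16.0            # electron mobility, cm^2/(V*s) (reported value)

sigma = q * n * mu   # conductivity, S/cm
rho = 1.0 / sigma    # resistivity, Ohm*cm
print(f"rho = {rho:.1e} Ohm*cm")   # ~3.5e-03 Ohm*cm, consistent with the quoted 3.6e-03
```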

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 243
325 Sensitivity and Uncertainty Analysis of Hydrocarbon-In-Place in Sandstone Reservoir Modeling: A Case Study

Authors: Nejoud Alostad, Anup Bora, Prashant Dhote

Abstract:

Kuwait Oil Company (KOC) has been producing from its major reservoirs, which are well defined, highly productive, and of superior reservoir quality. These reservoirs are maturing, and priority is shifting towards difficult reservoirs to meet future production requirements. This paper discusses the results of a detailed integrated study of one of the satellite complex fields discovered in the early 1960s. Following the acquisition of new 3D seismic data in 1998 and re-processing work in 2006, an integrated G&G study was undertaken to review the Lower Cretaceous prospectivity of this reservoir. Nine wells have been drilled in the area to date, with only three wells showing hydrocarbons in two formations. The average oil density is around 30° API (American Petroleum Institute), and the average porosity and water saturation of the reservoir are about 23% and 26%, respectively. The area is dissected by a number of NW-SE trending faults. Structurally, it consists of horsts and grabens bounded by these faults and is hence compartmentalized. The Wara/Burgan formation consists of discrete, dirty sands with clean channel sand complexes. There is a dramatic change in Upper Wara distributary channel facies, and the reservoir quality of the Wara and Burgan sections varies with the change of facies over the area. Predicting reservoir facies and quality from sparse well data is therefore a major challenge for delineating the prospective area. To characterize the reservoir of the Wara/Burgan formation, an integrated workflow involving seismic, well, petrophysical, reservoir, and production engineering data has been used. Porosity and water saturation models were prepared and analyzed to predict the reservoir quality of the Wara and Burgan 3rd sand upper reservoirs. Subsequently, boundary conditions were defined for reservoir and non-reservoir facies by integrating facies, porosity, and water saturation. Based on detailed analyses of the volumetric parameters, potential volumes of stock-tank oil initially in place (STOIIP) and gas initially in place (GIIP) were documented after running several probabilistic sensitivity analyses using the Monte Carlo simulation method. Sensitivity analysis on probabilistic models of reservoir horizons, petrophysical properties, and oil-water contacts, and their effect on reserves, clearly shows some alteration in the reservoir geometry. All these parameters have a significant effect on the oil in place. This study has helped to identify the uncertainty and risks of this particular prospect, and the company is planning to develop the area by drilling new wells.
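
To make the volumetric Monte Carlo step concrete, a minimal sketch is given below. The distributions, gross rock volume, net-to-gross, and formation volume factor are illustrative assumptions (only the ~23% porosity and ~26% water saturation echo the abstract); this is not the study's actual model.

```python
# Illustrative Monte Carlo STOIIP sketch; parameter ranges and distributions
# are assumptions for demonstration, not the field data described above.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

grv = rng.triangular(80e6, 100e6, 130e6, N)          # gross rock volume, m^3 (assumed)
ntg = rng.uniform(0.5, 0.8, N)                       # net-to-gross (assumed)
phi = rng.normal(0.23, 0.02, N).clip(0.05, 0.35)     # porosity, ~23% per the abstract
sw  = rng.normal(0.26, 0.03, N).clip(0.05, 0.60)     # water saturation, ~26% per the abstract
bo  = rng.uniform(1.1, 1.3, N)                       # oil formation volume factor (assumed)

stoiip_m3  = grv * ntg * phi * (1.0 - sw) / bo       # volumetric STOIIP equation
stoiip_bbl = stoiip_m3 * 6.2898                      # m^3 to stock-tank barrels

p90, p50, p10 = np.percentile(stoiip_bbl, [10, 50, 90])
print(f"STOIIP P90/P50/P10: {p90:.3e} / {p50:.3e} / {p10:.3e} bbl")
```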

Keywords: original oil-in-place, sensitivity, uncertainty, sandstone, reservoir modeling, Monte-Carlo simulation

Procedia PDF Downloads 199
324 The Development of Local-Global Perceptual Bias across Cultures: Examining the Effects of Gender, Education, and Urbanisation

Authors: Helen J. Spray, Karina J. Linnell

Abstract:

Local-global bias in adulthood is strongly dependent on environmental factors and a global bias is not the universal characteristic of adult perception it was once thought to be: whilst Western adults typically demonstrate a global bias, Namibian adults living in traditional villages possess a strong local bias. Furthermore, environmental effects on local-global bias have been shown to be highly gender-specific; whereas urbanisation promoted a global bias in urbanised Namibian women but not men, education promoted a global bias in urbanised Namibian men but not women. Adult populations, however, provide only a snapshot of the gene-environment interactions which shape perceptual bias. Yet, to date, there has been little work on the development of local-global bias across environmental settings. In the current study, local-global bias was assessed using a similarity-matching task with Navon figures in children aged between 4 and 15 years from across three populations: traditional Namibians, urban Namibians, and urban British. For the two Namibian groups, measures of urbanisation and education were obtained. Data were subjected to both between-group and within-group analyses. Between-group analyses compared developmental trajectories across population and gender. These analyses revealed a global bias from even as early as 4 in the British sample, and showed that the developmental onset of a global bias is not fixed. Urbanised Namibian children ultimately developed a global bias that was indistinguishable from British children; however, a global bias did not emerge until much later in development. For all populations, the greatest developmental effects were observed directly following the onset of formal education. No overall gender effects were observed; however, there was a significant gender by age interaction which was difficult to reconcile with existing biological-level accounts of gender differences in the development of local-global bias. Within-group analyses compared the effects of urbanisation and education on local-global bias for traditional and urban Namibian boys and girls separately. For both traditional and urban boys, education mediated all effects of age and urbanisation; however, this was not the case for girls. Traditional Namibian girls retained a local bias regardless of age, education, or urbanisation, and in urbanised girls, the development of a global bias was not attributable to any one factor specifically. These results are broadly consistent with aforementioned findings that education promoted a global bias in urbanised Namibian men but not women. The development of local-global bias does not follow a fixed trajectory but is subject to environmental control. Understanding how variability in the development of local-global bias might arise, particularly in the context of gender, may have far-reaching implications. For example, a number of educationally important cognitive functions (e.g., spatial ability) are known to show consistent gender differences in childhood and local-global bias may mediate some of these effects. With education becoming an increasingly prevalent force across much of the developing world it will be important to understand the processes that underpin its effects and their implications.

Keywords: cross-cultural, development, education, gender, local-global bias, perception, urbanisation, urbanization

Procedia PDF Downloads 141
323 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components, a magnetic filler and an elastomeric matrix. Their properties can be tailored by applying an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles possess low stability against temperature and acidic environments, several methods to address these drawbacks have been developed. In most cases, core-shell structures have been employed as a suitable way to protect the magnetic particles against thermal and chemical oxidation. However, if the shell is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed, because with the in situ polymerization technique it is very difficult to control the polymerization rate and the polymer shell grows too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effect is calculated as the elastic modulus upon application of the magnetic field relative to the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different chain lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. The selection of the pyrrole-based monomer is therefore crucial, as a controllably thin layer of conducting polymer can be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell and dispersed in an elastomeric matrix, can also find use in electromagnetic wave shielding applications.
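
For clarity, the relative MR effect described above is commonly expressed as follows (the notation is generic and not taken from the paper):

```latex
% G'_B: storage modulus with the magnetic field applied (flux density B)
% G'_0: storage modulus in the absence of the field (off-state stiffness)
\[
  e_{\mathrm{MR}} \;=\; \frac{G'_{B}}{G'_{0}},
  \qquad\text{or, as a relative increase,}\qquad
  \Delta e_{\mathrm{MR}} \;=\; \frac{G'_{B} - G'_{0}}{G'_{0}} .
\]
```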

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding

Procedia PDF Downloads 211
322 Active Abdominal Compression Device for Treatment of Orthostatic Hypotension

Authors: Vishnu Emani, Andreas Escher, Ellen Roche

Abstract:

Background: Orthostatic hypotension (OH) is an autonomic disorder marked by a sudden drop in blood pressure upon standing. OH is especially prevalent in elderly populations, affecting more than 30% of Americans over the age of 70, and it is one of the most significant risk factors for accidental falls in the elderly, making it a crucial focus for medical and device therapies. Pharmacologic therapy with midodrine and fludrocortisone may alleviate hypotension, but these agents have significant adverse side effects. Passive abdominal compression devices (binders) are more effective than lower-extremity compression stockings at mitigating postural hypotension by improving venous return to the heart. However, abdominal binders are difficult to don and uncomfortable to wear, leading to poor compliance. A further disadvantage of passive compression devices is their inability to selectively compress during the crucial moment of standing. We have recently developed an active compression device that applies external pressure on the abdomen during the transition from a supine to an upright position and conducted initial prototype testing. Methods: An active abdominal compression device was developed utilizing a simple, servo-driven strap-tightening mechanism that supplies tension to foam fabric, which applies pressure to the abdomen. Healthy volunteers (n=5) participated in prototype testing and were subjected to three conditions: no compression, passive compression (i.e., a standard abdominal binder), and active compression (device prototype). The applied abdominal pressure during device activation was measured by a strain-gauge manometer placed between the skin and the binder. Systolic (SBP) and mean (MAP) arterial blood pressures were measured with a standard blood pressure cuff in the supine position, followed by repeat measurements at 1-minute intervals for 5 minutes after assuming the upright position. A survey tool was administered to determine scores (1-10) for comfort and ease of donning the abdominal binders. Results: Abdominal pressure increased from 0 to 15±3 mmHg upon device activation for both passive and active compression devices. During the transition from the supine to the upright position, both active and passive compression demonstrated significantly higher MAP compared to the no-compression condition (67±4, 68±5, and 62±5 mmHg, respectively; P<0.05), but there was no statistically significant difference in SBP or MAP when comparing active to passive compression. Active compression demonstrated significantly higher comfort scores (8.3±1) compared to passive compression (3.2±2) but lower scores than no compression (10). Subjects universally reported that the active compression device was easier to don than the passive device. Conclusions: Active or passive abdominal compression prevents hypotension associated with postural changes. Active compression is associated with increased comfort and ease of donning compared to passive compression devices. Future trials are warranted to investigate the efficacy of our device in patients with OH.

Keywords: orthostatic hypotension, compression binder, abdominal binder, active abdominal compression

Procedia PDF Downloads 28
321 Understanding Face-to-Face Household Gardens’ Profitability and Local Economic Opportunity Pathways

Authors: Annika Freudenberger, Sin Sokhong

Abstract:

In just a few years, the Face-to-Face Victory Gardens Project (F2F) in Cambodia has grown into a high-impact initiative that provides immediate and tangible benefits to local families. This has been accomplished with a relatively hands-off approach that relies on households' own motivation and personal investments of time and resources, which is both unique and impressive in the landscape of NGO and government initiatives in the area. Households have been growing food both for their own consumption and to sell or exchange. Not all targeted beneficiaries are equally motivated or maximize their involvement, but there is a clear subset of households, particularly those who serve as facilitators, whose circumstances have been transformed as a result of F2F. A number of household factors and contextual economic factors affect families' income generation opportunities. All the households we spoke with became involved with F2F with the goal of selling some proportion of their produce (i.e., not exclusively for their own consumption). For some, this income is marginal and supplemental to their core household income; for others, it is substantial and transformative. Some engage directly with customers and buyers in their immediate community, while others sell in larger nearby markets, and others link up with intermediary vendors. All struggle, to a certain extent, to compete in a local economy flooded with cheap produce imported from large-scale growers in neighboring provinces, Thailand, and Vietnam, although households that grow and sell herbs and greens popular in Khmer cuisine have found a stronger local market. Some are content with the scale of their garden, the income they make, and the current level of effort required to maintain it; others would like to expand but are faced with land constraints and water management challenges. Households making a substantial income from selling their products have achieved success in different ways, making it difficult to pinpoint a clear "model" for replication. Within our small sample of interviewees, it seems as though the families with a clear passion for their gardens and high motivation to work hard to bring their products to market have succeeded in doing so. Khmer greens and herbs have been the most successful; they are not high-value crops, but they are fairly easy to grow, and there is a constant demand. These crops are also not imported as much, so prices are more stable than those of crops such as long beans. Although we talked to a limited number of individuals, it also appears that successful families either restricted their crops to those that grow well in drought or flood conditions (depending on which affects them most) or already benefit from water management infrastructure such as water tanks, which helps them diversify their crops and build their resilience.

Keywords: food security, Victory Gardens, nutrition, Cambodia

Procedia PDF Downloads 59
320 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. Machine learning algorithms have the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images; its importance lies in its potential to assist medical professionals in accurately diagnosing diseases and thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in their diagnosis. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner; incorporating machine learning algorithms can significantly enhance prediction accuracy. The study utilized the Mask R-CNN algorithm, a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information that included both symptom data and X-ray images, and its performance was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study incorporated symptoms as input data for disease prediction, using three classifiers, namely Random Forest, K-Nearest Neighbor, and Support Vector Machine, trained and tested on the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases from symptoms, with ensemble learning techniques significantly improving prediction accuracy. The model developed in this study therefore has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently, ultimately leading to better patient care. It is important to note, however, that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the training dataset, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
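
As an illustration of the symptom-based branch, the sketch below combines the three named classifiers in a soft-voting ensemble using scikit-learn. The placeholder data, feature encoding, and hyper-parameters are assumptions for demonstration only; the study's actual dataset and pipeline are not reproduced here.

```python
# Hedged sketch of the symptom-based ensemble; data and hyper-parameters are
# illustrative, not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# X: one row per patient with binary symptom indicators; y: disease label.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 30))   # placeholder symptom matrix
y = rng.integers(0, 3, size=500)         # placeholder disease classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf",  RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    voting="soft",                       # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```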

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 157
319 The Impact of Emotional Intelligence on Organizational Performance

Authors: El Ghazi Safae, Cherkaoui Mounia

Abstract:

Within companies, emotions have long been overlooked as key elements of successful management systems, seen instead as factors that disturb judgment, provoke reckless acts, or negatively affect decision-making. Management systems were shaped by the Taylorist image of the worker, which made work regular and plain and treated employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements of high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes, for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have thus become fundamental and indispensable to individual output and, in turn, to management efficiency. The idea that a person's potential is determined by intellectual intelligence, measured by IQ, as the main factor of social, professional, and even sentimental success, is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend on intellectual intelligence alone but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence: does it impact organizational performance, and if so, how? The literature highlights that emotional intelligence is difficult to conceptualize and measure. Efforts to measure it have identified three prominent models: the ability model, the mixed model, and the trait model. The first treats emotional intelligence as a cognitive skill, the second mixes emotional skills with personality-related aspects, and the third is intertwined with personality traits. However, despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because, even though performance is at the heart of all evaluation processes in companies and organizations, it remains a multidimensional concept whose vagueness many authors have noted. Given the above, this article provides an overview of research on emotional intelligence, focusing on studies that have investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it affects companies' performance.

Keywords: emotions, performance, intelligence, firms

Procedia PDF Downloads 108
318 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Polyurethane (PU) foam is typically sold as acoustic foam and is often used as sound insulation in settings such as nightclubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without seeping out through the test room floor. Without a mass loss measurement, the gas yields needed for smoke toxicity analysis cannot be calculated, which makes comparison with data from any other material or test method difficult. Additionally, the heat release measurements are not representative, as much of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The modified layout combined the SBI (single burning item) test set-up with the ISO 9705 test room. Polyurethane was tested in two configurations with the aim of altering the ventilation condition: test one used a single SBI rig, aiming for well-ventilated flaming, and test two used two SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming. The two configurations successfully achieved both well-ventilated and under-ventilated flaming, as shown by the equivalence ratios measured with a phi meter designed and built for these experiments. The findings show that doubling the fuel loading will successfully force under-ventilated flaming conditions; this method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other produced a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced by the PU foam in terms of oxygen depletion and simple gases such as CO and CO₂. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials under different fire conditions at large scale.
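
For reference, the equivalence ratio reported by a phi meter is conventionally defined as below (generic notation, not taken from the paper):

```latex
% phi < 1 indicates well-ventilated (fuel-lean) flaming;
% phi > 1 indicates under-ventilated (fuel-rich) flaming.
\[
  \phi \;=\;
  \frac{\left( m_{\mathrm{fuel}} / m_{\mathrm{air}} \right)_{\mathrm{actual}}}
       {\left( m_{\mathrm{fuel}} / m_{\mathrm{air}} \right)_{\mathrm{stoichiometric}}}
\]
```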

Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity

Procedia PDF Downloads 76
317 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, which makes them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection, with two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. Machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five algorithms were deployed to compare authentication performance: Decision Trees, Linear Discriminant Analysis, Naive Bayes, Support Vector Machines (SVMs) with a Gaussian radial basis kernel, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, making it difficult to distinguish a creator from an intruder during authentication. For each password entered by a user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions: Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5, 10, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
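
To make the authentication step concrete, the sketch below normalizes the four behavioral features and trains the best-performing classifier reported above, an RBF-kernel SVM, with scikit-learn. The synthetic feature values and hyper-parameters are illustrative assumptions, not the study's data.

```python
# Hedged sketch of the authentication step; feature values, scaling choice and
# hyper-parameters are illustrative assumptions, not the study's exact pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Four features per password entry: score, length, speed, size (synthetic).
rng = np.random.default_rng(0)
genuine  = rng.normal([0.8, 120, 1.5, 300], [0.05, 10, 0.2, 25], size=(200, 4))
imposter = rng.normal([0.6, 100, 2.2, 260], [0.15, 25, 0.6, 60], size=(200, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * 200 + [0] * 200)      # 1 = genuine user, 0 = imposter

# Normalize the features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```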

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 107
316 Crop Breeding for Low Input Farming Systems and Appropriate Breeding Strategies

Authors: Baye Berihun Getahun, Mulugeta Atnaf Tiruneh, Richard G. F. Visser

Abstract:

Resource-poor farmers practice low-input farming systems, and yet most breeding programs give little attention to this vast farming system, which serves as a source of food and income for many people in developing countries. Conventional high-input breeding appears to have failed to adequately meet the needs and requirements of the 'difficult' environments operating under this system. Moreover, the resources required for crop production are becoming increasingly scarce, the environment is degraded by excessive use of agrochemicals, crop productivity has reached a plateau, particularly in developed nations, the world population is increasing, and food shortages persist in poor societies. In various parts of the world, genetic gain at the farmers' level remains low, which could be associated with low adoption of crop varieties developed under high-input systems. Farmers usually use their local varieties and apply minimal inputs as a risk-avoiding and cost-minimizing strategy. This evidence indicates that the conventional high-input plant breeding system has failed to feed the world population, and the world is moving further away from the United Nations' goals of ending hunger, food insecurity, and malnutrition. In this review, we discuss the rationale for breeding programs focused on low-input farming systems and the technical aspects of crop breeding that accommodate future food needs, as well as their significance for developing countries facing declining resources for crop production. To this end, the application of exotic introgression techniques such as polyploidization, pan-genomics, comparative genomics, and de novo domestication as pre-breeding techniques is discussed as a way to exploit the untapped genetic diversity of crop wild relatives (CWRs). Desired recombinants developed at the pre-breeding stage are then exploited through appropriate breeding approaches such as evolutionary plant breeding (EPB), rhizosphere-related trait breeding, and participatory plant breeding. Populations advanced through evolutionary breeding, such as composite cross populations (CCPs), and the rhizosphere-associated trait breeding approach, which provides opportunities for improving tolerance to abiotic and biotic soil stresses, nutrient acquisition capacity, and crop-microbe interaction in improved varieties, are reviewed. Overall, we conclude that the low-input farming system is a vast farming system requiring distinctive breeding approaches, and that exotic pre-breeding introgression techniques together with breeding approaches that deploy the skills and knowledge of both breeders and farmers are vital for developing heterogeneous landrace populations that serve farmers practicing low-input farming across the world.

Keywords: low input farming, evolutionary plant breeding, composite cross population, participatory plant breeding

Procedia PDF Downloads 55