Search results for: code acquisition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2393

53 Cognitive Decline in People Living with HIV in India and Correlation with Neurometabolites Using 3T Magnetic Resonance Spectroscopy (MRS): A Cross-Sectional Study

Authors: Kartik Gupta, Virendra Kumar, Sanjeev Sinha, N. Jagannathan

Abstract:

Introduction: A significant number of patients with human immunodeficiency virus (HIV) infection show neurocognitive decline (NCD), ranging from minor cognitive impairment to severe dementia. The possible causes of NCD in HIV-infected patients include brain injury by HIV before cART, neurotoxic viral proteins, and metabolic abnormalities. In the present study, we compared the level of NCD in asymptomatic HIV-infected patients with changes in brain metabolites measured using magnetic resonance spectroscopy (MRS). Methods: 43 HIV-positive patients (30 males and 13 females) attending the ART center of the hospital and HIV-seronegative healthy subjects were recruited for the study. All participants completed MRI and MRS examinations, detailed clinical assessments, and a battery of neuropsychological tests. All MR investigations were carried out on a 3.0T MRI scanner (Ingenia/Achieva, Philips, Netherlands). The MRI examination protocol included the acquisition of T2-weighted images in axial, coronal and sagittal planes, and T1-weighted, FLAIR, and DWI images in the axial plane. Patients who showed any apparent lesion on MRI were excluded from the study. T2-weighted images in three orthogonal planes were used to localize the voxel in the left frontal lobe white matter (FWM) and left basal ganglia (BG) for single-voxel MRS. Spectra were acquired with a point-resolved spectroscopy (PRESS) localization pulse sequence at an echo time (TE) of 35 ms and a repetition time (TR) of 2000 ms with 64 or 128 scans. Automated preprocessing and determination of absolute metabolite concentrations were performed using LCModel with the water-scaling method, and the Cramer-Rao lower bounds for all metabolites analyzed in the study were below 15%. Levels of total N-acetyl aspartate (tNAA), total choline (tCho), glutamate + glutamine (Glx), and total creatine (tCr) were measured. Cognition was tested using a battery of tests validated for the Indian population. The cognitive domains tested were memory, attention-information processing, abstraction-executive function, and simple and complex perceptual motor skills. Z-scores normalized for age, sex, and education were used to calculate dysfunction in these individual domains. NCD was defined as dysfunction with a Z-score ≤ 2 in at least two domains. One-way ANOVA was used to compare the difference in brain metabolites between the patients and healthy subjects. Results: NCD was found in 23 (53%) patients. There was no significant difference in age, CD4 count, and viral load between the two groups. Maximum impairment was found in the domains of memory and simple motor skills, i.e., 19/43 (44%). The prevalence of deficit in attention-information processing, complex perceptual motor skills, and abstraction-executive function was 37%, 35%, and 33%, respectively. Subjects with NCD had a higher level of glutamate in the frontal region (8.03 ± 2.30 vs. 10.26 ± 5.24, p = 0.001). Conclusion: Among newly diagnosed, ART-naïve retroviral disease patients from India, cognitive decline was found in 53% of patients using tests validated for this population. Those with neurocognitive decline had a significantly higher level of glutamate in the left frontal region. There was no significant difference in age, CD4 count, and viral load at initiation of ART between the two groups.
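The study's case definition is easy to operationalize. Below is a minimal sketch, in Python, of domain Z-scoring and the "dysfunction in at least two domains" rule; the normative means/SDs and patient scores are placeholders, and the cutoff is implemented as two standard deviations below the normative mean, which is the conventional reading of the abstract's criterion.

```python
# Hypothetical age/sex/education-adjusted norms (mean, SD) per cognitive domain;
# the validated Indian battery would supply the real values.
NORMS = {
    "memory":        (50.0, 10.0),
    "attention":     (50.0, 10.0),
    "executive":     (50.0, 10.0),
    "simple_motor":  (50.0, 10.0),
    "complex_motor": (50.0, 10.0),
}

def z_scores(raw):
    """Normalize raw domain scores against normative means and SDs."""
    return {d: (raw[d] - mu) / sd for d, (mu, sd) in NORMS.items()}

def has_ncd(raw, cutoff=-2.0, min_domains=2):
    """Flag NCD: dysfunction (Z two or more SDs below the norm) in >= 2 domains."""
    return sum(z <= cutoff for z in z_scores(raw).values()) >= min_domains

patient = {"memory": 28.0, "attention": 29.0, "executive": 48.0,
           "simple_motor": 52.0, "complex_motor": 47.0}
print(has_ncd(patient))  # True: memory and attention both fall below the cutoff
```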

Keywords: HIV, neurocognitive decline, neurometabolites, magnetic resonance spectroscopy

Procedia PDF Downloads 169
52 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy

Authors: M. Regina Carreira-Lopez

Abstract:

Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logic relation (L) can be formally defined as a triple ontological relation (LO) equivalent to ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implementing SQL programming code and allows data export to XML schemas, for achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that the co-associations between (t) and its corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of online catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, by a prior adaptation of the base ontology to structured meta-languages, such as OWL or RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a semantic indexing approach based on facets. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCFL, BKF⟩ where LN is a set of entities that connect with other nodes to form a rooted tree in ⟨LN, LE⟩, LT specifies a set of terms, and LCFL acts as a finite set of concepts, encoded in a formal language, FL. Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages oriented to structured documents in XML.
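The inverse-frequency weighting that Mondoc applies to co-occurrent terms can be illustrated with a toy calculation. The sketch below, in Python rather than the project's R/RMySQL stack, computes an inverse document frequency over a stand-in corpus; the documents and terms are illustrative only.

```python
import math

# Toy stand-in for the Atlantic-migration corpus (D); in Mondoc the texts
# would be retrieved from the MySQL store via the RMySQL package.
docs = [
    "emigration vessel atlantic passage liberty",
    "passage atlantic famine emigration",
    "declaration liberty independence war",
]

def inverse_doc_frequency(term, documents):
    """Higher when the term co-occurs in fewer documents: the 'inverse
    documentary relevance' the model weighs for hypernymy phenomena."""
    n = sum(term in d.split() for d in documents)
    return math.log(len(documents) / n) if n else 0.0

for t in ("atlantic", "liberty", "famine"):
    print(t, round(inverse_doc_frequency(t, docs), 3))
```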

Keywords: hypernymy, information retrieval, lightweight ontology, resonance

Procedia PDF Downloads 102
51 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice

Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson

Abstract:

As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges, and a commitment to developing and training ethical practitioners, outline the characteristics of a profession embedded with leadership skills. This presentation will provide an overview of the historical context of social work leadership; examine social work as a unique leadership model composed of the qualities and theories that inform effective leadership capability as it relates to the profession's code of ethics; reflect critically on leadership theories and their foundational comparison; and, finally, look at recommendations and their implementation in social work education and practice. As with leadership in general, there is no universally accepted definition of social work leadership. However, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social work leaders and followers. However, this research is interested in working backward from that approach and examining social workers' perspectives on leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership, but not to lead. In addition, this research is significant because it explores aspiring social work leaders' competence to translate social work practice into direct leadership skills. The research question seeks to answer whether social work training and competencies are sufficient to determine whether social workers believe they possess the capacity and skill to engage in leadership practice. Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate how the development of social workers is sufficient in defining leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, where leaders work in relation with others and embody partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert-scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Other than the proposed social work leadership measurement tool, there is no other mechanism based on social work theory and designed to measure the capacity and skill of social work leadership.

Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool

Procedia PDF Downloads 147
50 Special Educational Needs Coordinators in England: Changemakers in Mainstream School Settings

Authors: Saneeya Qureshi

Abstract:

This paper reports doctoral research into the impact of Special Educational Needs Coordinators (SENCOs) on teachers in England, UK. Since 1994, it has been compulsory for all mainstream schools in the UK to have a SENCO who coordinates assessment and provision for supporting pupils with Special Educational Needs (SEN), helping teachers to develop and implement optimal SEN planning and resources. SENCOs' roles have evolved as various policies continually redefined SEN provision, impacting their positioning within the school hierarchical structure. SENCOs in England are increasingly recognised as key members of school senior management teams. In this paper, it will be argued that despite issues around the transformative 'professionalisation' of their role, and subsequent conflict around boundaries and power relations, SENCOs enhance teachers' abilities in terms of delivering optimal SEN provision. There is a significant international dimension to the issue: a similar role in respect of SEN management already exists in countries such as Ireland, Finland and Singapore, whilst in other countries, such as Italy and India, the introduction of a role similar to that of a SENCO is currently under discussion. The research question addressed is: do SENCOs enhance teachers' abilities to be effective teachers of children with Special Educational Needs? The theoretical framework of the project is that of interpretivism, as it is acknowledged that contexts and realities are social constructions. The study applied a mixed-method approach consisting of two phases. The first phase involved a purposive survey (n=42) of 223 primary school SENCOs, which enabled a deeper insight into SENCOs' perceptions of their roles in relation to teachers. The second phase consisted of semi-structured interviews (n=36) of SENCOs, teachers and head teachers, in addition to scrutiny of school SEN-related documentation. 'Trustworthiness' was accomplished through data and methodological triangulation, in addition to a rigorous process of coding and thematic analysis. The research was informed by an Ethical Code as per national guidelines. Research findings point to the evolutionary aspect of the SENCO role having engendered a culture of expectations amongst practitioners, as SENCOs transition from being 'fixers' to being 'enablers' of teachers. Outcomes indicate that SENCOs can empower teaching staff through the dissemination of specialist knowledge; however, resources must be clearly identified for such dissemination to take place. It is imperative that SENCOs and teachers alike address the absolution of responsibility that arises when ownership and accountability for the planning and implementation of SEN provision are not clarified, so as to promote a positive school ethos around inclusive practices. Optimal outcomes through effective SEN interventions and teaching practices are positively correlated with the inclusion of teachers in the planning and execution of SEN provision. An international audience can consider how the key findings are manifest in a global context, with reference to their own educational settings. Research outcomes can aid the development of the specific competencies needed to shape optimal inclusive educational settings in accordance with official global priorities pertaining to inclusion.

Keywords: inclusion, school professionals, school leadership, special educational needs (SEN), special educational needs coordinators (SENCOs)

Procedia PDF Downloads 178
49 A Digital Clone of an Irrigation Network Based on Hardware/Software Simulation

Authors: Pierre-Andre Mudry, Jean Decaix, Jeremy Schmid, Cesar Papilloud, Cecile Munch-Alligne

Abstract:

In most of the Swiss Alpine regions, the availability of water resources is usually adequate even in times of drought, as evidenced by the summers of 2003 and 2018. Indeed, important natural stocks are for the moment available in the form of snow and ice, but the situation is likely to change in the future due to global and regional climate change. In addition, alpine mountain regions are areas where climate change will be felt very rapidly and with high intensity. For instance, the ice regime of these regions has already been affected in recent years, with a modification of monthly availability and of extreme precipitation events. The current research, focusing on the municipality of Val de Bagnes, located in the canton of Valais, Switzerland, is part of a project led by the Altis company and carried out in collaboration with WSL, BlueArk Entremont, and HES-SO Valais-Wallis. In this region, water occupies a key position, notably for winter and summer tourism. Thus, multiple actors want to apprehend the future needs and availability of water, on both the 2050 and 2100 horizons, in order to plan modifications to the water supply and distribution networks. For those changes to be salient and efficient, good knowledge of the current water distribution networks is of utmost importance. In the current case, the drinking water network is well documented, but this is not the case for the irrigation one. Since water consumption for irrigation is ten times higher than for drinking water, data acquisition on the irrigation network is a major point in determining future scenarios. This paper first presents the instrumentation and simulation of the irrigation network using custom-designed IoT devices, coupled with a simulated digital clone to reduce the number of measuring locations. The developed ad-hoc IoT devices are energy-autonomous and can measure flows and pressures using industrial sensors such as calorimetric water flow meters. Measurements are periodically transmitted using the LoRaWAN protocol over a dedicated infrastructure deployed in the municipality. The gathered values can then be visualized in real time on a dashboard, which also provides historical data for analysis. In a second phase, a digital clone of the irrigation network was modelled using EPANET, software for water distribution systems that performs extended-period simulations of flows and pressures in pressurized networks composed of reservoirs, pipes, junctions, and sinks. As preliminary work, only a part of the irrigation network was modelled and validated by comparison with the measurements. The simulations are carried out by imposing the consumption of water at several locations. The validation is performed by comparing the simulated pressures at different nodes with the measured ones. An accuracy of +/- 15% is observed on most of the nodes, which is acceptable for the operator of the network and demonstrates the validity of the approach. Future steps will focus on the deployment of the measurement devices on the whole network and the complete modelling of the network. Then, scenarios of future consumption will be investigated. Acknowledgment: The authors would like to thank the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (OFAG) for their financial support, and ALTIS for the technical support; this project is part of the Swiss pilot program 'Adaptation aux changements climatiques'.
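Such a clone can be scripted against the EPANET engine from Python. Below is a minimal sketch using the wntr package (a Python interface to EPANET-style extended-period simulation); the input file name, node IDs, and measured heads are placeholders rather than project data.

```python
import wntr  # Python wrapper around the EPANET solver

# Hypothetical network file for the modelled part of the irrigation system;
# 'irrigation.inp' and the node IDs below are placeholders, not project files.
wn = wntr.network.WaterNetworkModel("irrigation.inp")
sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()

# Compare simulated pressures with field measurements at instrumented nodes.
measured = {"J12": 41.5, "J27": 38.0}  # metres of head, from the IoT devices
for node, p_meas in measured.items():
    p_sim = results.node["pressure"].loc[:, node].mean()
    err = (p_sim - p_meas) / p_meas
    print(f"{node}: simulated {p_sim:.1f} m, measured {p_meas:.1f} m, error {err:+.1%}")
```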

Keywords: hydraulic digital clone, IoT water monitoring, LoRaWAN water measurements, EPANET, irrigation network

Procedia PDF Downloads 116
48 Collagen/Hydroxyapatite Compositions Doped with Transitional Metals for Bone Tissue Engineering Applications

Authors: D. Ficai, A. Ficai, D. Gudovan, I. A. Gudovan, I. Ardelean, R. Trusca, E. Andronescu, V. Mitran, A. Cimpean

Abstract:

In recent years, scientists have worked hard to mimic bone structures in order to develop implants and biostructures that present higher biocompatibility and a reduced rejection rate. One way to achieve this goal is to use materials similar to those of bone, namely collagen/hydroxyapatite composite materials. However, it is very important to tailor both the composition and the microstructure of the bone substitute, which together ensure optimal osteointegration and the mechanical properties required by the application. In this study, new collagen/hydroxyapatite composite materials doped with Cu, Li, Mn and Zn were successfully prepared. The synthesis method is described below: weigh the Ca(OH)₂ (7.3067 g) and the dopant precursors ZnCl₂ (0.134 g), CuSO₄ (0.159 g), Li₂CO₃ (0.133 g) and MnCl₂·4H₂O (0.1971 g), and suspend in 100 ml of distilled water under magnetic stirring. To the suspension thus obtained, a solution of NaH₂PO₄·H₂O (8.247 g dissolved in 50 ml of distilled water) is added dropwise at 1 ml/min, followed by adjusting the pH to 9.5 with HCl and finally filtering and washing until neutral pH. The as-obtained slurry was dried in the oven at 80°C and then calcined at 600°C in order to purify the final product of organic phases, also inducing a proper sterilization of the mixture before insertion into the collagen matrix. The collagen/hydroxyapatite composite materials are tailored from a morphological point of view to balance their biocompatibility and bio-integration against mechanical properties, whereas the addition of the dopants is aimed at improving the biological activity of the samples. The addition of transitional metals can improve biocompatibility and especially osteoblast adhesion (Mn²⁺), or induce slightly better osteoblast differentiation, Zn²⁺ being a cofactor for many enzymes including those responsible for cell differentiation. If the amount is too high, the final material can become toxic and lose all of its biocompatibility. In order to achieve good biocompatibility and not reach a cytotoxic effect, the amount of transitional metals added has to be maintained at low levels (0.5% molar). The amount of transitional metals entering the elemental cell of HA will be verified using an inductively coupled plasma mass spectrometry system. This highly sensitive technique is necessary because, at such low levels of transitional metals, the difference between biocompatible and cytotoxic is a very thin line, thus requiring proper and thorough investigation using a precise technique. In order to determine the structure and morphology of the obtained composite materials, IR spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectrometry (EDS) were used. Acknowledgment: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project "Biomimetic porous structures obtained by 3D printing developed for bone tissue engineering" (BIOGRAFTPRINT), No. 127PED/2017, is also highly acknowledged.
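As a reader's check on the dopant level, the dopant-to-calcium molar ratios implied by the masses above can be computed directly. The sketch below uses standard molar masses, takes Li₂CO₃ as the lithium precursor, and assumes each dopant batch is prepared separately; it is an illustration for inspection, not part of the authors' protocol.

```python
# Molar masses in g/mol (standard values).
M = {"Ca(OH)2": 74.09, "ZnCl2": 136.29, "CuSO4": 159.61,
     "Li2CO3": 73.89, "MnCl2.4H2O": 197.91}
# Masses from the synthesis description above, in grams.
masses = {"Ca(OH)2": 7.3067, "ZnCl2": 0.134, "CuSO4": 0.159,
          "Li2CO3": 0.133, "MnCl2.4H2O": 0.1971}

mol = {k: masses[k] / M[k] for k in masses}
for dopant in ("ZnCl2", "CuSO4", "Li2CO3", "MnCl2.4H2O"):
    ratio = 100 * mol[dopant] / mol["Ca(OH)2"]
    print(f"{dopant}: {ratio:.2f} mol% relative to Ca")  # printed for inspection
```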

Keywords: collagen, composite materials, hydroxyapatite, bone tissue engineering

Procedia PDF Downloads 175
47 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior obeys elasticity. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, plastic states are not restricted only to those lying on a surface. Elastoplastic bounding surface models have been improved over the years; however, there is still a need to improve their capability to simulate the response of anisotropically consolidated cohesive soils, especially the response in extension tests. Thus, in this work, an improved constitutive model that can more accurately predict the diverse stress-strain phenomena exhibited by cohesive soils was developed, in particular an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The model's numerical implementation in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model in simulating the response of cohesive soils through extensive comparisons of model simulations to experimental data, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
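To make the stress-integration machinery concrete, the sketch below implements the classical elastic-predictor/radial-return step for 1D linear-hardening plasticity in Python. It is a deliberately simplified stand-in for, not an implementation of, the paper's adaptive multistep scheme for the generalized bounding surface model; the material constants are illustrative.

```python
import numpy as np

def radial_return_1d(eps_inc, sigma, E=200e3, H=10e3, sigma_y=250.0):
    """One elastic-predictor / radial-return step for 1D linear-hardening
    plasticity (units: MPa). A toy analogue of the local iteration and
    radial return used in more general stress-point algorithms."""
    sigma_trial = sigma + E * eps_inc                # elastic predictor
    f = abs(sigma_trial) - sigma_y                   # yield function check
    if f <= 0.0:
        return sigma_trial, sigma_y                  # purely elastic step
    dgamma = f / (E + H)                             # plastic multiplier
    sigma_new = sigma_trial - E * dgamma * np.sign(sigma_trial)  # return map
    return sigma_new, sigma_y + H * dgamma           # stress, hardened yield

sigma, sy = 0.0, 250.0
for deps in [0.001] * 5:                             # strain-driven loading
    sigma, sy = radial_return_1d(deps, sigma, sigma_y=sy)
    print(f"sigma = {sigma:7.1f} MPa, yield = {sy:6.1f} MPa")
```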

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 294
46 Smart Services for Easy and Retrofittable Machine Data Collection

Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum

Abstract:

This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the prefabrication sheet metal and sheet metal processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities from a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves, by non-invasively measuring the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users. First, wear-part and product-material condition monitoring and predictive maintenance for sawing processes: the non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and of the quality of consumables and materials, so that service providers and machine operators can use this data to optimize maintenance and reduce downtime and material waste. Second, optimization of Overall Equipment Effectiveness (OEE) by monitoring machine activity: the non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and reduce unplanned machine downtime. Third, estimation of CO2 emissions for connected machines: CO2 emissions are calculated for the entire life of the machine and for individual production steps based on captured power-consumption data, supporting energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SMEs. Easy2IoT provides SMEs with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g., through retrofitting of existing machines.
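The OEE service rests on the standard decomposition OEE = availability x performance x quality. A minimal sketch follows; the shift figures are illustrative, and in Easy2IoT the run, setup, and downtime inputs would come from the non-invasive tension measurements.

```python
def oee(planned_time_s, downtime_s, ideal_cycle_s, parts_made, good_parts):
    """Overall Equipment Effectiveness = availability * performance * quality.
    All times in seconds; inputs would be derived from captured machine
    activity (machining, setup, and idle times)."""
    run_time = planned_time_s - downtime_s
    availability = run_time / planned_time_s
    performance = (ideal_cycle_s * parts_made) / run_time
    quality = good_parts / parts_made
    return availability * performance * quality

# Example: an 8 h shift with 1 h downtime, 90 s ideal cut time, 250 cuts, 240 good.
print(f"OEE = {oee(8 * 3600, 3600, 90, 250, 240):.1%}")  # -> OEE = 75.0%
```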

Keywords: smart services, IIoT, IIoT-platform, industrie 4.0, big data

Procedia PDF Downloads 45
45 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired

Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo

Abstract:

Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on the use of traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIP's ability for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa - Visual Substitution System (AVSS). This system integrates ultrasonic sensors with custom electronics and computer vision models (convolutional neural networks) in order to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules: the sensing and the actuation module, which are fitted to a chest mount and belt that communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these within the chest mount allows an accurate representation of the surrounding space, discretized into three different levels of proximity, ranging from 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, like staircases, using a convolutional neural network specifically trained for this purpose. Subsequently, the depth data is used to estimate the distance between the stairs and the user. The information gathered from this module is then sent to the actuation module, which creates an HMI by means of a 3x2 array of vibration motors that make up the tactile display and allow the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), changing both in amplitude and frequency, to deliver different awareness levels according to the proximity of the obstacle. This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted in order to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS); this is considered an adequate processing speed, taking into account that the walking speed of VIP is 1.439 m/s. In order to test the HMI, we conducted a focus group composed of two females and two males between the ages of 35 and 65 years. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages throughout five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and locate them within the tactile display; both were calculated as the mean across all subjects. Results show that the focus group achieved message discrimination of 70% and localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling the user's awareness of their surrounding space.
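The proximity-to-tactone mapping can be sketched as a simple lookup. The three bands below follow the abstract's three proximity levels over the 0-6 m range, but the band edges and the amplitude/frequency values are assumptions for illustration, not the device's firmware.

```python
def tactone(distance_m, max_range=6.0):
    """Map an obstacle distance to a vibration command for the 3x2 motor
    array. Three proximity levels over 0-6 m follow the abstract; the
    amplitude (0-1) and frequency values here are illustrative only."""
    if distance_m > max_range:
        return None                                       # nothing to signal
    if distance_m > 4.0:
        return {"level": "far", "amplitude": 0.3, "freq_hz": 100}
    if distance_m > 2.0:
        return {"level": "mid", "amplitude": 0.6, "freq_hz": 175}
    return {"level": "near", "amplitude": 1.0, "freq_hz": 250}

for d in (5.5, 3.0, 0.8):
    print(d, tactone(d))
```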

Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems

Procedia PDF Downloads 53
44 Effectiveness of Simulation Resuscitation Training to Improve Self-Efficacy of Physicians and Nurses at Aga Khan University Hospital in Advanced Cardiac Life Support Courses Quasi-Experimental Study Design

Authors: Salima R. Rajwani, Tazeen Ali, Rubina Barolia, Yasmin Parpio, Nasreen Alwani, Salima B. Virani

Abstract:

Introduction: Nurses and physicians have a critical role in initiating lifesaving interventions during cardiac arrest. Timely delivery of high-quality cardiopulmonary resuscitation (CPR), with advanced resuscitation skills and management of cardiac arrhythmias, is a key dimension of a code during cardiac arrest. The chances of patient survival decrease if healthcare professionals are unable to initiate CPR in a timely manner. Moreover, traditional training does not prepare physicians and nurses to a competent level, and their knowledge declines over time. In this regard, simulation training has been proven effective in promoting resuscitation skills. Simulation as a teaching-learning strategy improves knowledge and skill performance during resuscitation through experiential learning, without compromising patient safety in real clinical situations. The purpose of the study is to evaluate the effectiveness of simulation training in Advanced Cardiac Life Support courses by using a self-efficacy tool. Methods: The study is a quantitative, non-randomized quasi-experimental design. It examined the effectiveness of simulation through self-efficacy in two instructional methods: Medium Fidelity Simulation (MFS) and the Traditional Training Method (TTM). The sample size was 220. Data were compiled using SPSS. Standardized simulation-based training increases self-efficacy, knowledge, and skills, and improves the management of patients in actual resuscitation. Results: 153 students participated in the study; CG: n = 77 and EG: n = 77. The comparison between arms in the pre- and post-test (F = 1.69, p = 0.195, df = 1) showed no significant difference between arms. The interaction between arms was also examined, and there was no significant difference in the interaction between arms in the pre- and post-test (F = 0.298, p = 0.586, df = 1). However, the results showed that self-efficacy scores were significantly higher within the experimental group in the post-test of the advanced cardiac life support resuscitation courses as compared to the TTM (p < 0.0001, F = 143.316): post-test mean 45.01 (SD 9.29) versus pre-test mean 31.15 (SD 12.76) in the experimental group, compared with post-test mean 29.68 (SD 14.12) versus pre-test mean 42.33 (SD 11.39) in the TTM group. Conclusion: The standardized simulation-based training was conducted in a safe learning environment in Advanced Cardiac Life Support courses, and physicians and nurses benefited in terms of self-confidence, early identification of life-threatening scenarios, early initiation of CPR, delivery of high-quality CPR, timely administration of medication and defibrillation, appropriate airway management, rhythm analysis and interpretation, Return of Spontaneous Circulation (ROSC), team dynamics, debriefing, and teaching and learning strategies that will improve patient survival in actual resuscitation.
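The arm comparison reported above is an F-test; for readers who want to reproduce the mechanics, here is a minimal sketch of a one-way ANOVA on synthetic scores whose means and SDs loosely echo the post-test figures. The data are randomly generated, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical post-test self-efficacy scores for the two arms; the means and
# SDs roughly mirror the reported figures, but the samples are synthetic.
mfs = rng.normal(45.0, 9.3, 77)    # medium-fidelity simulation arm
ttm = rng.normal(29.7, 14.1, 77)   # traditional training arm

f, p = stats.f_oneway(mfs, ttm)    # one-way ANOVA across the two groups
print(f"F = {f:.1f}, p = {p:.4g}")
```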

Keywords: advanced cardiac life support, cardio pulmonary resuscitation, return of spontaneous circulation, simulation

Procedia PDF Downloads 53
43 Implementation of Real-World Learning Experiences in Teaching Courses of Medical Microbiology and Dietetics for Health Science Students

Authors: Miriam I. Jimenez-Perez, Mariana C. Orellana-Haro, Carolina Guzman-Brambila

Abstract:

As part of microbiology and dietetics courses, students of medicine and nutrition analyze the main pathogenic microorganisms and perform dietary analyses. The microbiology course describes in a general way the main pathogens, including bacteria, viruses, fungi, and parasites, as well as their interaction with the human species. We hypothesize that the lack of practical application of the course causes students not to see its value and clinical application, when in reality it is a matter of great importance for healthcare in our country. The medical microbiology and dietetics courses are mostly theoretical, with only a few hours of laboratory practice. It is therefore necessary to incorporate new, innovative techniques that involve more practice, community fieldwork, real-case analysis and real-life situations. The purpose of this intervention was to incorporate real-world learning experiences into the instruction of medical microbiology and dietetics courses, in order to improve the learning process, understanding, and application in the field. During a period of 6 months, medicine and nutrition students worked in a community of urban poverty. We worked with 90 children between 4 and 6 years of age from low-income families with no access to medical services, to give an infectious diagnosis related to nutritional status in these children. We expected this intervention to give a different kind of context to medical microbiology and dietetics students, improving their learning process and applying their knowledge and laboratory practice to help a community in need. First, students learned basic skills in microbiology diagnostic tests during laboratory sessions. Once students acquired the abilities to perform biochemical tests and handle biological samples, they went to the community and took stool samples from the children (with the corresponding informed consent). Students processed the samples in the laboratory, searching for enteropathogenic microorganisms with the RapID™ ONE system (Thermo Scientific™) and for parasites using the modified Willis and Malloy technique. Finally, they compared the results with the nutritional status of the children, previously measured by anthropometric indicators. The anthropometric results were interpreted with the WHO Anthro software (WHO, 2011). The microbiological results were interpreted with the ERIC® Electronic RapID™ Code Compendium software and validated by a physician. The results were analyses of infectious outcomes and nutritional status. Through these fieldwork community learning experiences, our students improved their knowledge of microbiology and were capable of applying this knowledge in a real-life situation. They found this kind of learning useful when translating theory into a real-life situation. For most of our students, this is their first contact as health caregivers with a real population, and this contact is very important to help them understand the reality of many people in Mexico. In conclusion, real-world or fieldwork learning experiences empower our students to gain a real and better understanding of how they can apply their knowledge in microbiology and dietetics and help a population in great need, the kind of reality in which many people live in our country.

Keywords: real-world learning experiences, medical microbiology, dietetics, nutritional status, infectious status

Procedia PDF Downloads 103
42 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and It’s Application for Infrastructure Development

Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas

Abstract:

One of the most frightening phenomena of nature is the occurrence of an earthquake, as it has terrible and disastrous effects. Many earthquakes occur every day worldwide, so there is a need for knowledge regarding the trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the establishment of the worldwide network of seismological stations made this possible. From the analysis of recorded earthquake data, earthquake parameters and source parameters can be computed, and earthquake catalogues can be prepared. These catalogues provide information on origin time, epicenter locations (in terms of latitude and longitude), focal depths, magnitude and other related details of the recorded earthquakes, and are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze and display geographic information. The implementation of integrated GIS technology provides an approach that permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS technology provides a powerful tool for displaying outputs and permits users to see the graphical distribution of impacts of different earthquake scenarios and assumptions. An endeavor has been made in the present study to compile the earthquake data for the whole world in Visual Basic on the ArcGIS platform so that the data can be used easily for further analysis by earthquake engineers. The basic data on time of occurrence, location and size of earthquakes have been compiled for further querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region. The user interface also includes the seismic hazard information already worked out under the GSHAP program; the seismic hazard, in terms of probability of exceedance in definite return periods, is provided for the world. The seismic zones of the Indian region are included in the user interface from the IS 1893-2002 code on earthquake-resistant design of buildings. City-wise satellite images have been inserted into the map, and based on actual data the following information can be extracted in real time: • analysis of soil parameters and their effect • microzonation information • seismic hazard and strong ground motion • soil liquefaction and its effect on the surrounding area • impacts of liquefaction on buildings and infrastructure • occurrence of future earthquakes and their effect on existing soil • propagation of earth vibration due to the occurrence of an earthquake. The GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All this information can be used for the development of infrastructure, i.e., multi-storey structures, irrigation dams and their components, hydropower, etc., in real time, for the present and the future.
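The catalogue-querying idea is straightforward to sketch. The toy example below uses Python with pandas instead of the study's Visual Basic/ArcGIS stack; the three catalogue rows are illustrative, and a real run would load the compiled worldwide catalogue.

```python
import pandas as pd

# Toy catalogue with the fields the abstract lists (origin time, epicenter
# latitude/longitude, focal depth, magnitude).
cat = pd.DataFrame({
    "time": pd.to_datetime(["2001-01-26", "2004-12-26", "2015-04-25"]),
    "lat": [23.4, 3.3, 28.2],
    "lon": [70.2, 95.9, 84.7],
    "depth_km": [16.0, 30.0, 8.2],
    "mag": [7.7, 9.1, 7.8],
})

# Interactive-style query: events of magnitude >= 7.5 inside a lat/lon window.
box = cat[(cat.mag >= 7.5) & cat.lat.between(20, 30) & cat.lon.between(65, 90)]
print(box)
```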

Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development

Procedia PDF Downloads 294
41 Drivers of Global Great Power Assertiveness: Russia and Its Involvement in the Global South

Authors: Elina Vroblevska, Toms Ratfelders

Abstract:

This paper examines the impact of international status-seeking aspirations on great power behavior within the international system. In particular, we seek to test the assumption, advanced by proponents of Social Identity Theory (SIT), that the inability to achieve social mobility by joining perceived higher-status social groups (of states) leads great powers to adopt the approach of social competition, in which they aim to equal or outdo the dominant group in the area on which its claim to superior status rests. Since the dissolution of the Soviet Union, Russia has struggled to be accepted as a great power by the group of Western states that had created the dominant international system order while the Soviet states were isolated. While the 1990s and the beginning of the 21st century can be characterized by striving to integrate into the existing order, the second decade has seen a rather sharp turn towards creating a new power center for Russia through the realization of ideas of multipolarity, rivalry, and the uniqueness of the state itself. Increasingly, we have seen the Kremlin striving to collaborate with and mobilize groups of states that fall outside the categories of democracy, multiculturalism, and international order as perceived by the dominant group, which can be described as the West. Instead, Russia builds its own narrative in which it creates an alternative understanding of these values, differentiating itself from the higher-status social group. The Global South, from a Russian perspective, is the group of states that can still be swayed to create an alternative power center in the international system - one where Russia can assert its status as a great power. This is based on a number of reasons, the most important being that the Global North is already highly institutionalized in terms of economy (the EU) and defense (NATO), leaving no room for Russia but to integrate within the existing framework. Second, there is the difference in values and their interpretation: Russia has been adamant, for the last twenty years, on basing its moral code on traditional values like religion, the heterosexual family model, and moral superiority, which contradict the overall secularism of the Global North. And last, there is the striking difference in understanding of state governance models: with Russia becoming more autocratic over the course of the last 20 years, it has deliberately created distance between itself and democratic states, entering a "gray area" of alternative understandings of democracy that is more relatable to Global South countries. Using computational text analysis of excerpts of Vladimir Putin's speeches delivered from 2000 to 2022 regarding areas that fall outside Russia's immediate area of interest (the Global South), we identify 80 topics that relate to a particular component of great power status - the interest in using force globally. These topics are compared across four temporal frames that capture periods of more and less permeable Western social boundaries. We find a negative association between such permeability and Putin's emphasis on "use of force" topics. This lends further support to Social Identity Theory and contributes to broadening its applicability to questions related to great power assertiveness in areas outside their primary focus regions.
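As an illustration of the computational text analysis step, here is a minimal topic-modelling sketch in Python using scikit-learn's latent Dirichlet allocation. The excerpts are invented stand-ins, the topic count is reduced to two for readability (the study identified 80), and the paper does not specify its exact toolchain.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-in excerpts; the study analyzed Putin's 2000-2022 speeches.
excerpts = [
    "our forces will defend partners in the region",
    "trade and energy cooperation with southern partners",
    "military operation protects our interests abroad",
    "cultural and traditional values unite our nations",
]

X = CountVectorizer(stop_words="english").fit_transform(excerpts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))  # per-document topic mixtures
```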

Keywords: Russia, Global South, great power, identity

Procedia PDF Downloads 30
40 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is used to represent a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, seeking numerical rather than analytic solutions is the more frequently used procedure. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and with the continuity equation added to the system, the number of equations equals the number of unknowns. In this situation, velocity and pressure components emerge as the two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are jointly solved at the same nodal points, some problems confront us. To overcome this problem, using a staggered grid system is the preferred solution method. For computerized solutions of the staggered grid system, various algorithms have been developed; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for Newtonian flow, with mass and gravitational forces neglected, for an incompressible, laminar fluid, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, as a computer solution algorithm, despite some disadvantages, the SIMPLER algorithm is more practical and gives results in a shorter time. For this study, a code was developed in the DELPHI programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the numbers of grid cells and nodes were estimated. At the same time, to show that the obtained results are satisfactory, a grid-independence (GCI) analysis was carried out for coarse, medium and fine grid systems over the solution domain. It was observed that, when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
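For readers unfamiliar with the hybrid scheme and Gauss-Seidel iteration named above, a compact sketch follows. The hybrid coefficient is Patankar's max formulation, and the solver is reduced to a 1D line of nodes to stay short; coefficients and boundary values are illustrative, not the study's DELPHI code.

```python
def hybrid_coefficient(F, D):
    """Patankar's hybrid scheme: neighbour coefficient from convective flux F
    and diffusive conductance D. Reduces to central differencing for low cell
    Peclet number and to upwinding when |F/D| exceeds 2."""
    return max(F, D + F / 2.0, 0.0)

def gauss_seidel(a_p, a_nb, b, phi, iters=200):
    """In-place Gauss-Seidel sweeps for the discretized equation
    a_p*phi_P = a_nb*(phi_W + phi_E) + b on a 1D line of nodes
    (toy version of the 2D solver; the boundary values are held fixed)."""
    for _ in range(iters):
        for i in range(1, len(phi) - 1):
            phi[i] = (a_nb * (phi[i - 1] + phi[i + 1]) + b) / a_p
    return phi

phi = [0.0] * 11
phi[-1] = 1.0  # Dirichlet boundaries: 0 at one end, 1 at the other
print(gauss_seidel(a_p=2.0, a_nb=1.0, b=0.0, phi=phi))  # converges to a line
```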

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 364
39 Fold and Thrust Belts Seismic Imaging and Interpretation

Authors: Sunjay

Abstract:

Plate tectonics is of very great significance as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts, or orogens, and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the backarc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction. Collisional and noncollisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions which degrade the quality of the seismic image. These characteristics lead to a low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing alters the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio) and migrate seismic events to their appropriate locations in space and depth. Processing steps generally include analysis of velocities, static corrections, moveout corrections, stacking and migration. Shadow zones: in exploration seismology, areas with no reflections (dead areas) are called shadow zones and are common in the vicinity of faults and other discontinuous areas in the subsurface. Shadow zones result when energy from a reflector is focused on receivers that produce other traces; as a result, reflectors are not shown in their true positions. Subsurface discontinuities: diffractions occur at discontinuities in the subsurface such as faults and velocity discontinuities (as at 'bright spot' terminations), and the bow-tie effect is caused by deep-seated synclines. Related topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts, imaging deformation in submarine thrust belts using seismic attributes, imaging thrust and fault zones using 3D seismic image processing techniques, and checking balanced structural cross-sections against seismic interpretation pitfalls; such pitfalls can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology. Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data: coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. This imaging also supports carbon capture and geological storage leakage surveillance, because a fault can behave as a seal or as a conduit for hydrocarbon transport to a trap.
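Of the processing steps listed, moveout correction is the easiest to show compactly. The sketch below evaluates the hyperbolic normal-moveout traveltime used to flatten reflections before stacking; the offsets and velocity are illustrative, not from a real survey.

```python
import numpy as np

def nmo_time(t0, offset, v_nmo):
    """Hyperbolic normal-moveout traveltime: t(x) = sqrt(t0^2 + (x / v)^2),
    with zero-offset time t0 in seconds, offset x in metres, and NMO
    velocity v in m/s. Subtracting t0 gives the correction applied per trace."""
    return np.sqrt(t0**2 + (offset / v_nmo) ** 2)

offsets = np.array([0.0, 500.0, 1000.0, 2000.0])       # metres
print(nmo_time(t0=1.0, offset=offsets, v_nmo=2500.0))  # traveltimes in seconds
```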

Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation

Procedia PDF Downloads 43
38 Health and Climate Changes: "Ippocrate" a New Alert System to Monitor and Identify High Risk

Authors: A. Calabrese, V. F. Uricchio, D. di Noia, S. Favale, C. Caiati, G. P. Maggi, G. Donvito, D. Diacono, S. Tangaro, A. Italiano, E. Riezzo, M. Zippitelli, M. Toriello, E. Celiberti, D. Festa, A. Colaianni

Abstract:

Climate change has a severe impact on human health. There is a vast literature demonstrating that temperature increase is causally related to cardiovascular problems and represents a high risk for human health, but there are no studies proposing a solution. In this work, we study how climate influences human health parameters through the analysis of climatic conditions in an area of the Apulia Region: the Municipality of Capurso. At the same time, the medical personnel involved identified a set of variables useful for defining an index describing health condition. These scientific studies are the basis of an innovative alert system, IPPOCRATE, whose aim is to assess climate risk and share information with the population at risk to support prevention and mitigation actions. IPPOCRATE is an e-health system designed to provide technological support for the analysis of health risks related to climate and to provide tools for the prevention and management of critical events. It is the first integrated system for the prevention of human risk caused by climate change. IPPOCRATE calculates risk by weighting meteorological data with the vulnerability of monitored subjects, and uses mobile and cloud technologies to acquire and share information over different data channels. It is composed of four components. Multichannel Hub: the ICT infrastructure used to feed the IPPOCRATE cloud with different types of data coming from remote monitoring devices or imported from meteorological databases. Such data are ingested, transformed and processed in order to be dispatched to the mobile app and VoIP phone systems. The IPPOCRATE Multichannel Hub uses open communication protocols to create a set of APIs for interfacing IPPOCRATE with third-party applications; internally, it uses a non-relational paradigm to create a flexible and highly scalable database. WeHeart and Smart Application: the wearable device WeHeart is equipped with sensors designed to measure the following biometric variables: heart rate, systolic and diastolic blood pressure, blood oxygen saturation, body temperature, and blood glucose for diabetic subjects. WeHeart is designed to be easy to use and non-invasive; for data acquisition, users only need to wear it and connect it to the Smart Application via the Bluetooth protocol. EasyBox: designed to take advantage of new e-health technologies, EasyBox allows the user to fully exploit all IPPOCRATE features; its name reveals its purpose as a container for the various devices that may be included depending on user needs. Territorial Registry: the IPPOCRATE web module reserved for medical personnel for monitoring, research and analysis activities. The Territorial Registry gives access to all information gathered by IPPOCRATE, using a GIS to execute spatial analysis combining geographical data (climatological information and monitored data) with the clinical history of users and their personal details. It was designed for different types of users: control rooms managed by wide-area health facilities, a single health care center, or a single doctor; it manages this hierarchy by diversifying access to system functionalities. IPPOCRATE is the first e-health system focused on climate risk prevention.
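The risk computation is described only at the level of weighting meteorological data with subject vulnerability, so the sketch below is a toy version of that idea; the 0-1 scales, weights, and alert threshold are assumptions, not the project's actual scoring model.

```python
def climate_health_risk(hazard, vulnerability, w_climate=0.6, w_subject=0.4):
    """Toy risk index: a weighted combination of a meteorological hazard score
    and a subject vulnerability score, both assumed to be normalized to 0-1.
    The weights and the alert threshold below are illustrative assumptions."""
    return w_climate * hazard + w_subject * vulnerability

# A heatwave day (hazard 0.9) for a cardiopathic subject (vulnerability 0.8).
risk = climate_health_risk(0.9, 0.8)
print("ALERT" if risk > 0.7 else "ok", round(risk, 2))  # -> ALERT 0.86
```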

Keywords: climate change, health risk, new technological system

Procedia PDF Downloads 838
37 Primary and Secondary Big Bangs Theory of Creation of Universe

Authors: Shyam Sunder Gupta

Abstract:

For the creation of the Universe, the theory of the Big Bang from a Singularity is the most widely accepted theory, but it has limitations, as it does not answer how the Singularity gets created and what causes the Big Bang. Further, the Universe is composed of 95% Dark Energy and Dark Matter, with the visible part making up the remaining 5%, but no explanation is offered. Recently, it has been reported that there could be a very large number of Universes, but this remains only a stipulation. This research, which is based on the Bhagvat Puran, a Vedic scripture, answers all these questions. There is a Unique Energy Field which is eternal and infinite. The carrier particles of the Unique Energy are Paramanus, the God Particles. Paramanus are fundamental particles, and combinations of these particles create bigger particles from which the Universe gets created. For creation to initiate, the Unique Energy is represented in three phases: Positive Male Energy, Neutral Energy (which creates Eternal Time) and Negative Female Energy. The Positive Male Energy further expands into three forms of Creative Energies (CE1, CE2 and CE3), and 16 principles get created, namely: Energy of Activation, Energy of Action, Energy of Darkness, Pradhan (the equilibrium state of the three energies), Prakriti (the non-equilibrium state of the three energies, creating the modes of Activation, Action and Darkness), Mahat-tattva (consisting of the three modes, dominant in the Mode of Darkness), Time, Energy of Consciousness, Ego Energy (consisting of the three modes, very strongly dominated by the Mode of Darkness), Energy of Intellect, Mind Energy, Sky (creating Space and Sound Energy), Air (creating gaseous substances), Fire (creating different forms of energy such as thermal, light and electrical), Water (creating liquid substances) and Earth (creating solid substances). The CE1 Energy creates an infinite number of Singularities from seven principles: Pradhan, Mahat-tattva, Sky, Air, Fire, Water and Earth. The CE1 Energy divides into CE2 and enters, along with the other nine principles, each Singularity; the Primary Big Bang then takes place and an infinite number of Universes get created. Each Universe has seven coverings of the seven principles, and each layer is 10 times thicker than the previous layer. Through the CE2 Energy, the space in the Universe under the coverings is divided into two parts, an upper part and a lower part. The upper part is occupied by Dark Energy, created from the Mode of Darkness in the Ego Energy, which keeps getting converted into Dark Matter and forms the invisible part of the Universe. In the lower part, the process of evolution is initiated and the seeds of 24 elements get created: Consciousness, Ego, Intellect, Mind, the 5 fundamental elements (Space, Air, Fire, Water and Earth, which create non-living matter), the 5 senses which receive inputs (eyes, nose, ears, tongue and skin), the 5 working senses (smell, taste, sight, touch and hearing), and the 5 elements of action (the organs of procreation, excretion, locomotion, speech and acquisition). In the CE2 Energy, a Singularity gets created which is exploded by the force of the Energy of Action; the Secondary Big Bang takes place and the visible Universe gets created in the shape of a lotus flower bud. Within the visible part of the Universe, a small part, the Phenomenal Universe, gets created. The diameter of the Sun and planetary system at the time of formation is 6.4 billion km, which is close to the reported value. There are five different orbits with reference to our Solar System: the Moon around the Earth takes one month, the Earth around the Sun one year, the Sun around the Milky Way one cosmic year (322.58 million years), the Milky Way around the Universe 4.32 billion years, and the Universe around the center of the universe 311.04 trillion years. Universe creation is a cyclic process with a cycle time of 622.08 trillion years. In summary, the Universe consists of 4 parts: the covering of 7 layers, Dark Energy and Dark Matter, and the Visible and Phenomenal Universe.
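
As a numerical consistency check (plain arithmetic on the figures quoted above), the stated cycle time is exactly twice the 311.04-trillion-year period attributed to the Universe's own revolution:

\[
T_{\text{cycle}} = 2\,T_{\text{universe}} = 2 \times 311.04 \times 10^{12}\ \text{years} = 622.08 \times 10^{12}\ \text{years}
\]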

Keywords: big bang, creation, dark energy, dark matter, singularity, universe

Procedia PDF Downloads 55
36 Exposing The Invisible

Authors: Kimberley Adamek

Abstract:

According to the Council on Tall Buildings and Urban Habitat, there has been a rapid increase in the construction of tall or "megatall" buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s, the eastern expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, will have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified in wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is considered only for basic loading. This typically results in increased structural expense and unwanted mitigation strategies proposed late in a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (the Walkie-Talkie Tower, London, England) and caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), as well as creating wind tunnels in streets and many other issues. Alternatively, there exist towers which have used their form to naturally draw in air and ventilate entire facilities, eliminating the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds in order to generate electricity (the Bahrain World Trade Center). Wind and weather exist and affect all parts of the world through science, health, war, infrastructure, catastrophes, tourism, shopping, media and materials. Working in partnership with RWDI, a leading wind engineering company, a series of tests, images and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested in wind tunnel and computational fluid dynamics systems, which will expose, utilize and manipulate wind flows to create new forms, technologies and experiences. Ultimately, this thesis aims to question the degree to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers and the general public the ability to broaden their scope so as to productively utilize this living phenomenon that everyone constantly feels but cannot see.

Keywords: wind engineering, climate, visualization, architectural aerodynamics

Procedia PDF Downloads 339
35 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for it being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to this type of flow is investigated at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature, and the effect of the Reynolds number in separated flows has mostly been evaluated through numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness; moreover, the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
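
The uncertainty analysis above relies on the moving block bootstrap, which resamples contiguous blocks of the velocity record so that serial correlation within each block is preserved. The sketch below is a minimal illustration of that standard technique on synthetic data; the block length and the toy "velocity" series are assumptions, and this is not the study's customized post-processing code.

```python
import numpy as np

def moving_block_bootstrap_ci(x, block_len=100, n_boot=2000, alpha=0.05, seed=None):
    """95% CI for the mean of a correlated time series (e.g. an ADV velocity
    record) via the moving block bootstrap: overlapping blocks are resampled
    with replacement, preserving serial correlation inside each block."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)        # all admissible block starts
    means = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.choice(starts, size=n_blocks)
        resampled = np.concatenate([x[s:s + block_len] for s in picks])[:n]
        means[b] = resampled.mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return x.mean(), lo, hi

# Synthetic correlated "velocity" record, just to exercise the function.
g = np.random.default_rng(0).normal(size=5200)
u = 0.40 + 0.02 * np.convolve(g, np.ones(20) / 20, mode="valid")
print(moving_block_bootstrap_ci(u, block_len=100, seed=1))
```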

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 111
34 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of self-regulation and development for researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, is, firstly, a scientifically sound way to conduct an assessment that maximizes the effectiveness of work at every step and, secondly, a use of quantitative methods for evaluating and assessing expert opinion and for collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: the selection of experts, the management of the assessment procedure, the processing of the results and the remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows one: to run, within one platform, the independent activities of different workgroups (e.g., expert officers, managers); to establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; to produce the required output documents for each workgroup; to configure information gathering for each workgroup (assessment forms, tests, inventories); to create and operate personal databases of remote users; and to set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was designed so that experts may submit not only their personal data, place of work and scientific degree but also keywords reflecting their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to the keywords; this method proved to work well, unlike the OECD classifier. At the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor checks the experts' reports to ensure that all formalities are in place (time frame, propriety, correspondence). If the difference between assessments exceeds four points, a third evaluation is appointed. When the expert finishes work on the expert opinion, the system shows a contract marked 'new'; the managers process the contract, and the expert receives an e-mail that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for the work. The specifics of the interaction between the expert officer and the experts will be presented in the report.
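
The keyword-based selection step described above can be pictured as a simple overlap ranking. The sketch below is a hedged illustration of that idea only; the data model, scoring and names are invented for the example and are not the HSE system's actual implementation.

```python
# Hypothetical sketch of keyword-based expert matching (illustrative only).

def match_experts(application_keywords, experts, top_n=2):
    """Rank experts by keyword overlap with an application and return the
    top_n candidates for the expert officer to approve."""
    app = {k.lower() for k in application_keywords}
    scored = []
    for expert in experts:
        overlap = app & {k.lower() for k in expert["keywords"]}
        if overlap and not expert.get("conflict_of_interest", False):
            scored.append((len(overlap), expert["name"], sorted(overlap)))
    scored.sort(reverse=True)          # most shared keywords first
    return scored[:top_n]

experts = [
    {"name": "A. Ivanova", "keywords": ["econometrics", "labor markets"]},
    {"name": "B. Petrov", "keywords": ["econometrics", "panel data", "education"]},
    {"name": "C. Sidorova", "keywords": ["sociology", "education"]},
]
print(match_experts(["education", "econometrics"], experts))
```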

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 188
33 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions

Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra

Abstract:

In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosol, get deposited on the inner surface of the piping system, mainly through gravitational settling and thermophoretic deposition. The removal processes in a complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure and flow rate. These parameters generally vary with time and must therefore be carefully monitored to predict the aerosol behavior in the piping system. The removal process depends on the size of the particles, which determines how many particles deposit or travel across the bends and reach the other end of the piping system. The released aerosol is deposited onto the inner surface of the piping system by various mechanisms, such as gravitational settling, Brownian diffusion and thermophoretic deposition. To obtain a correct estimate of deposition, the identification and understanding of the aforementioned deposition mechanisms are of great importance, as these mechanisms are significantly affected by different flow and thermodynamic conditions; thermophoresis in particular plays a significant role in particle deposition. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in dry environments to study the spatial distribution of particle mass and number concentration and their depletion due to the various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure and turbulence quantities in the piping system. In addition to the built-in models for turbulence, heat transfer and flow in the commercial CFD code (FLUENT), a population balance model (PBM) sub-model was used to describe the coagulation process and to compute the number concentration, along with the size distribution, at different sections of the piping; in this sub-model, the coagulation kernels are incorporated through a user-defined function (UDF). The experimental results were compared with the CFD model results. It was found that the largest fraction of the Zn particles (more than 35%) deposits near the inlet of the plenum chamber, with low deposition in the piping sections. The MMAD decreases along the length of the test assembly, which shows that large particles get deposited or removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of a bend was also observed: the relative loss in mass concentration at bends was higher at the high flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes than for intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that the particles are highly agglomerated, non-spherical and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
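
Population balance modelling of coagulation is built around the discrete Smoluchowski equation. The sketch below illustrates that core update with a constant kernel standing in for the UDF-supplied kernels; the explicit Euler scheme, bin structure and numerical values are illustrative assumptions, not the study's actual FLUENT coupling.

```python
import numpy as np

def coagulation_step(N, K, dt):
    """Explicit Euler update of number concentrations N[k] (bin k holds
    (k+1)-mers) for the discrete Smoluchowski equation with constant kernel K:
    dN_k/dt = 0.5 * sum_{i+j=k} K*N_i*N_j  -  N_k * sum_j K*N_j."""
    n = len(N)
    dN = np.zeros(n)
    for k in range(n):
        # Birth: collisions of an (a+1)-mer and a (k-a)-mer forming a (k+1)-mer.
        birth = 0.5 * sum(N[a] * N[k - 1 - a] for a in range(k))
        # Death: collisions of a (k+1)-mer with any other particle.
        death = N[k] * N.sum()
        dN[k] = K * (birth - death)
    return N + dt * dN

N = np.zeros(50)
N[0] = 1e12                      # monodisperse start, number conc. in #/m^3
for _ in range(100):
    N = coagulation_step(N, K=1e-15, dt=1.0)   # illustrative kernel, m^3/s
print("fraction remaining as monomers:", N[0] / 1e12)
```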

Keywords: aerosol, CFD, deposition, coagulation

Procedia PDF Downloads 121
32 A Multi-Scale Approach to Space Use: Habitat Disturbance Alters Behavior, Movement and Energy Budgets in Sloths (Bradypus variegatus)

Authors: Heather E. Ewart, Keith Jensen, Rebecca N. Cliffe

Abstract:

Fragmentation and changes in the structural composition of tropical forests, as a result of intensifying anthropogenic disturbance, are increasing pressures on local biodiversity. Species with low dispersal abilities have some of the highest extinction risks in response to environmental change, as even small-scale environmental variation can substantially impact their space use and energetic balance. Understanding the implications of forest disturbance is therefore essential, ultimately allowing for more effective and targeted conservation initiatives. Here, the impact of different levels of forest disturbance on the space use, energetics, movement and behavior of 18 brown-throated sloths (Bradypus variegatus) was assessed on the South Caribbean coast of Costa Rica. A multi-scale framework was used to measure forest disturbance, including large-scale (landscape-level classifications) and fine-scale (within and surrounding individual home ranges) forest composition. Three landscape-level classifications were identified: primary forests (undisturbed), secondary forests (some disturbance, regenerating) and urban forests (high levels of disturbance and fragmentation). Finer-scale forest composition was determined using measurements of habitat structure and quality within and surrounding individual home ranges for each sloth (home range estimates were calculated using autocorrelated kernel density estimation [AKDE]). Measurements of forest quality included tree connectivity, density, diameter and height, species richness, and percentage of canopy cover. To determine space use, energetics, movement and behavior, six sloths in urban forests, seven sloths in secondary forests and five sloths in primary forests were tracked using a combination of Very High Frequency (VHF) radio transmitters and Global Positioning System (GPS) technology over an average period of 120 days. All sloths were also fitted with micro data-loggers (containing tri-axial accelerometers and pressure loggers) for an average of 30 days to allow for behavior-specific movement analyses (data analysis is ongoing for the data-loggers and the primary forest sloths). Data-logger analyses include the determination of activity budgets, circadian rhythms of activity and energy expenditure (using the vector of the dynamic body acceleration [VeDBA] as a proxy). Analyses to date indicate that home range size increases significantly with the level of forest disturbance. Female sloths inhabiting secondary forests averaged 0.67-hectare home ranges, while female sloths inhabiting urban forests averaged 1.93-hectare home ranges (estimates are represented by median values to account for the individual variation in home range size in sloths). Likewise, home range estimates for male sloths were 2.35 hectares in secondary forests and 4.83 hectares in urban forests. Sloths in urban forests also used nearly double the number of trees (median = 22.5) compared with sloths in secondary forest (median = 12). These preliminary data indicate that forest disturbance likely heightens the energetic requirements of sloths, a species already critically limited by low dispersal ability and low rates of energy acquisition. The energetic and behavioral analyses from the data-loggers will be considered in the context of the fine-scale forest composition measurements (i.e., habitat quality and structure) and are expected to reflect the observed home range and movement constraints.
The implications of these results are far-reaching, presenting an opportunity to define a critical index of habitat connectivity for low-dispersal species such as sloths.
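
The VeDBA proxy mentioned above is conventionally computed by removing the static (gravitational) component from each accelerometer axis with a running mean and taking the vector norm of the remaining dynamic accelerations. A minimal sketch follows, assuming a hypothetical sampling rate and smoothing window; the study's actual processing pipeline is not specified in the abstract.

```python
import numpy as np

def vedba(ax, ay, az, fs_hz=40, window_s=2.0):
    """Vector of the Dynamic Body Acceleration, sample by sample:
    VeDBA = sqrt(dx^2 + dy^2 + dz^2), where each d* is the raw axis minus
    its running mean (the static, gravitational component)."""
    w = int(fs_hz * window_s)
    kernel = np.ones(w) / w
    dyn = []
    for a in (ax, ay, az):
        static = np.convolve(a, kernel, mode="same")   # running mean ~ gravity
        dyn.append(np.asarray(a) - static)
    return np.sqrt(dyn[0]**2 + dyn[1]**2 + dyn[2]**2)

# Synthetic 40 Hz record: gravity on z plus a small locomotion signal.
t = np.arange(0, 10, 1 / 40)
az = 1.0 + 0.1 * np.sin(2 * np.pi * 1.5 * t)
v = vedba(0.02 * np.random.randn(len(t)), np.zeros(len(t)), az)
print("mean VeDBA (g):", v.mean())
```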

Keywords: biodiversity conservation, forest disturbance, movement ecology, sloths

Procedia PDF Downloads 78
31 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English

Authors: Noury Bakrim

Abstract:

When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment and schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs. double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), the non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon problematizing the role of the brain and the sensorimotor contribution in the continuous forming form. We therefore hypothesize biotization as both process and trace co-constructing motivation and the forming form. Hence, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening up the stable form. We focus our inquiry on Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble) and [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splæʃ/ and /splæt/ imply an anamorphic representation of the state of the world: splash, an impact on an aquatic surface; splat, an impact on the ground. The pair has stridency and distribution as distinctive features which specify its phonetic realization (and a part of its meaning): /ʃ/ is [+strident] and /t/ is [+distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily and vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ in initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization, relating to sequence modeling under specific sensorimotor/verbal categories (production/perception); ii) the process of phonotactic selection, a prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition and oral/written reproduction, we aim at patterning and conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.
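
The distributional side of the study (counting how given consonant clusters are spread across texts) can be pictured with a simple frequency count. The sketch below is purely illustrative: the corpus sentence and the cluster inventory are examples, not the study's materials or code.

```python
import re
from collections import Counter

# Illustrative consonant-cluster distribution count over a written text
# (the cluster list and corpus are hypothetical examples).
CLUSTERS = ["spl", "spr", "sm", "gn"]

def cluster_counts(text):
    """Count occurrences of each target cluster across the words of a text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for w in words:
        for c in CLUSTERS:
            counts[c] += w.count(c)
    return counts

print(cluster_counts("The splash splattered; spray sprang up as smoke smouldered."))
```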

Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization

Procedia PDF Downloads 113
30 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework) and online analysis (IPython notebook) offer rich possibilities to improve, validate and speed up research. However, until today there has been no cross-platform integration of these subsystems. Furthermore, the implementation of online studies still suffers from complexity (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties or store participants' responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, the built-in functionality of Google Translate ensures automatic text translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis; alternatively, the data can be analyzed online within our framework using the integrated IPython notebook. The framework was designed such that studies can be used interchangeably between researchers. This supports not only the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, thereby improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to an increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and could not have been shown without the massive amount of data collected via our framework. In fact, these findings shed new light on cultural differences in spatial navigation. We therefore conclude that our new framework constitutes a wide-ranging advantage for online research and a methodological innovation by which new insights can be revealed on the basis of massive data collection.
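
The export-then-analyze workflow described above (csv export, cleaning, grouping by culture, gender and age) might look like the following in a notebook. This is a hedged sketch only: the column names and the toy table are assumptions standing in for the framework's actual export schema, where pd.read_csv("export.csv") would be used instead.

```python
import numpy as np
import pandas as pd

# Toy stand-in for an exported study table (hypothetical schema).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "culture": rng.choice(["A", "B", "C"], n),
    "gender": rng.choice(["f", "m"], n),
    "age": rng.integers(18, 70, n),
    "rt_ms": rng.normal(900, 250, n),
    "pointing_error_deg": rng.normal(25, 10, n),
})

# Basic cleaning: drop implausible reaction times before aggregating.
df = df[(df["rt_ms"] > 150) & (df["rt_ms"] < 10_000)]

# Performance by culture, gender and age band.
df["age_band"] = pd.cut(df["age"], bins=[17, 30, 45, 60, 70])
summary = (df.groupby(["culture", "gender", "age_band"], observed=True)
             ["pointing_error_deg"].agg(["mean", "sem", "count"]))
print(summary.head())
```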

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 233
29 The Shadowy History of Berlin Underground: 1939-45/Der Schattenmann: Tagebuchaufzeichnungen 1938-1945

Authors: Christine Wiesenthal

Abstract:

This paper asks how to read a particularly vexed and complicated life writing text. For over half a century, the wartime journals of Ruth Andreas-Friedrich (1901-1977) circulated as among a handful of more or less authoritative and "authentic" first-hand accounts of German resistance under Hitler. A professional journalist, Andreas-Friedrich is remembered today largely through her publications at the war's end, which appeared in English as Berlin Underground (published by Henry Holt in 1947), just before their publication in Germany as Der Schattenmann ("The Shadow Man", also 1947). A British edition by the now obscure Latimer House Limited (London) followed in 1948; it is based closely on, but is not identical to, the Henry Holt American edition, which in turn differs significantly from its German counterpart. Both Berlin Underground and Der Schattenmann present Andreas-Friedrich as a key figure in an anti-fascist cell that operated in Berlin under the code name "Uncle Emil" and provide a riveting account of political terror, opportunism and dissent under the Nazi regime. Recent scholars have, however, begun to raise fascinating and controversial questions about Andreas-Friedrich's own writing and reconstruction process in compiling the journals and about her highly selective curatorial role and claims. The apparent absence of any surviving original manuscript for Andreas-Friedrich's journals amplifies the questions around them. Crucially, so too does the role of the translator of the English editions of Berlin Underground, the enigmatic June Barrows Mussey, a subject that has thus far gone virtually unnoticed and which this paper will focus on. Mussey, a prolific American translator, simultaneously cultivated a career as a professional magician, publishing a number of books on that subject under the alias Henry Hay. While the record indicates that Mussey attempted to compartmentalize his professional life, research into the publishing and translation history of Berlin Underground suggests that the two roles converge in the fact of the translator's invisibility: by effacing the traces of his own hand and leaving unmarked his own significant textual interventions, Mussey, in effect, edited, abridged and altered Andreas-Friedrich's journals for a second time. Indeed, it could be said that while the fictitious "Uncle Emil" is positioned as "the shadow man" of the German edition, Mussey himself emerges as precisely that in the English rendering of the journals. The implications of Mussey's translation of Andreas-Friedrich's journals are among the most important unexamined gaps in the shadowy publishing history of Berlin Underground, a history full of "tricks" (Mussey's word) and illusions of transparency. Based largely on archival research of unpublished materials and on methods of close reading and comparative analysis, this study seeks to convey preliminary insights and exploratory work and to frame questions toward what is ultimately envisioned as an experimental project in poetic historiography. As this work is still in its early stages, the author especially welcomes the opportunity provided by this conference to connect with a community of life writing colleagues who might help think through some of the challenges and possibilities that lie ahead.

Keywords: women’s wartime diaries, translation studies, auto/biographical theory, politics of life writing

Procedia PDF Downloads 31
28 Dynamic High-Rise Moment Resisting Frame Dissipation Performances Adopting Glazed Curtain Walls with Superelastic Shape Memory Alloy Joints

Authors: Lorenzo Casagrande, Antonio Bonati, Ferdinando Auricchio, Antonio Occhiuzzi

Abstract:

This paper summarizes the results of a survey on the dynamic dissipation of smart non-structural elements installed in modern high-rise mega-frame prototypes. An innovative glazed curtain wall was designed using Shape Memory Alloy (SMA) joints in order to increase the energy dissipation and enhance the seismic/wind response of the structures. The studied buildings consisted of thirty- and sixty-storey planar frames, extracted from a reference three-dimensional steel Moment Resisting Frame (MRF) with outriggers and belt trusses. The internal core was composed of a concentrically braced frame (CBF) system, whilst outriggers were placed every fifteen storeys to limit second-order effects and inter-storey drifts. These structural systems were designed in accordance with European rules, and numerical FE models were developed with an open-source code able to account for geometric and material nonlinearities. With regard to the characterization of the non-structural building components, full-scale crescendo tests were performed on aluminium/glass curtain wall units at the laboratory of the Construction Technologies Institute (ITC) of the Italian National Research Council (CNR), deriving force-displacement curves. Three-dimensional brick-based inelastic FE models were calibrated against the experimental results, simulating the façade response. Since recent seismic events and extreme dynamic wind loads have caused widespread failure of non-structural components, which produces significant economic losses and represents a hazard to pedestrian safety, a more dissipative glazed curtain wall was studied. Taking advantage of the mechanical properties of SMAs, advanced smart joints were designed with the aim of enhancing both the dynamic performance of the single non-structural unit and the global behavior. Thus, three-dimensional brick-based plastic FE models were produced for the innovative non-structural system, simulating the evolution of mechanical degradation in the aluminium-to-glass and SMA-to-glass connections under high deformations. Subsequently, equivalent nonlinear links were calibrated to reproduce the behavior of both the tested and the smart-designed units, and implemented in the thirty- and sixty-storey structural planar frame FE models. Nonlinear time history analyses (NLTHAs) were performed to quantify the potential of the new system when considered in the lateral resisting frame system (LRFS) of modern high-rise MRFs. Sensitivity to the structure height was explored by comparing the responses of the two prototypes. Trends in global and local performance are discussed to show that, if accurately designed, advanced materials in non-structural elements provide new sources of energy dissipation.
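
Superelastic SMA joints dissipate energy through a "flag-shaped" force-displacement loop: loading and unloading plateaus at different force levels enclose an area (the dissipated energy) while the displacement fully recovers. The sketch below is a minimal, idealized, tension-only version of such a law, of the general kind used when calibrating equivalent nonlinear links; the parameters and the envelope-clipping scheme are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def flag_shape_response(d_history, k1=10.0, d_y=1.0, alpha=0.1, beta=0.5):
    """Idealized superelastic flag-shaped law (tension-only, rate-independent).
    Elastic trial increments with stiffness k1 are clipped between an upper
    (forward transformation) and a lower (reverse transformation) envelope."""
    F_y = k1 * d_y
    upper = lambda d: k1 * d if d <= d_y else F_y + alpha * k1 * (d - d_y)
    lower = lambda d: max(0.0, min(k1 * d, beta * F_y + alpha * k1 * (d - d_y)))
    F, d_prev, out = 0.0, 0.0, []
    for d in d_history:
        F += k1 * (d - d_prev)     # elastic trial increment
        F = min(F, upper(d))       # forward-transformation plateau
        F = max(F, lower(d))       # reverse-transformation plateau
        d_prev = d
        out.append(F)
    return np.array(out)

# One loading-unloading cycle: energy is dissipated, displacement recovers.
d = np.concatenate([np.linspace(0, 3, 100), np.linspace(3, 0, 100)])
F = flag_shape_response(d)
print("residual force at zero displacement:", round(F[-1], 3))
```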

Keywords: advanced technologies, glazed curtain walls, non-structural elements, seismic-action reduction, shape memory alloy

Procedia PDF Downloads 304
27 Advancing Dialysis Care Access And Health Information Management: A Blueprint For Nairobi Hospital

Authors: Kimberly Winnie Achieng Otieno

Abstract:

The Nairobi Hospital plays a pivotal role in healthcare provision in East and Central Africa, yet it faces challenges in providing accessible dialysis care. This paper explores strategic interventions to enhance dialysis care, improve access and streamline health information management, with the aim of fostering an integrated and patient-centered healthcare system in our region. Challenges at The Nairobi Hospital: The hospital currently grapples with insufficient dialysis machines, which results in extended turnaround times. This issue stems from both staffing bottlenecks and infrastructural limitations, given the growing demand for renal care services. A paper-based record-keeping system and a fragmented downstream flow of information hinder the hospital's ability to manage health data effectively. There is also a need to invest in expanding The Nairobi Hospital's dialysis facilities to reach distant communities: setting up satellite clinics closer to people who live far from the main hospital will ensure better access in underserved areas. Community Outreach and Education: Implementing education programs on kidney health within local communities is vital for early detection and prevention. Collaborating with local leaders and organizations can establish a proactive approach to renal health, hence reducing the demand for acute dialysis interventions. This effort can be amplified by expanding The Nairobi Hospital's corporate social responsibility outreach program with weekend engagement activities such as walks, awareness classes and fund drives. Enhancing Efficiency in Dialysis Care: Demand for dialysis services continues to rise due to an aging Kenyan population and the increasing prevalence of chronic kidney disease (CKD). Present at this year's International Nursing Conference is a diverse group of caregivers from around the world who can share their process optimization strategies, patient engagement techniques and resource utilization efficiencies to carry The Nairobi Hospital into the 21st century and beyond. Plans are under way to offer ongoing education opportunities to keep staff updated on best practices and emerging technologies, in addition to utilizing a patient feedback mechanism to identify areas for improvement and enhance satisfaction. Staff empowerment and suggestion boxes address The Nairobi Hospital's organizational challenges. Current financial constraints may limit a leap in technology integration, such as the acquisition of new dialysis machines and an investment in predictive analytics to forecast patient needs and optimize resource allocation. Streamlining Health Information Management: Fully embracing a shift to 100% Electronic Health Records (EHRs) is a transformative step toward efficient health information management. Shared information promotes a holistic understanding of patients' medical history, minimizing redundancies and enhancing overall care quality. To manage the transition to community-based care and EHRs effectively, a phased implementation approach is recommended. Conclusion: By strategically enhancing dialysis care access and streamlining health information management, The Nairobi Hospital can strengthen its position as a leading healthcare institution in East and Central Africa. This comprehensive approach aligns with the hospital's commitment to providing high-quality, accessible and patient-centered care in an evolving landscape of healthcare delivery.

Keywords: Africa, urology, dialysis, healthcare

Procedia PDF Downloads 24
26 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for the engineering research, which focuses on adapting and implementing genetic algorithms (GAs) for the engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that optimizes the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions needed to simplify the system of Blade Element Momentum Theory equations, resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
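
One standard candidate in the wind-modeling step above is the Weibull distribution. The sketch below shows a maximum-likelihood Weibull fit on synthetic wind speeds; it illustrates only one of the several estimation methods and candidate models the project compares, and the numbers are made up for the example.

```python
import numpy as np
from scipy import stats

# Synthetic hourly wind speeds standing in for the correlated on-site/airport
# series (shape k = 2, scale c = 6 m/s are illustrative values).
wind_speeds = stats.weibull_min.rvs(c=2.0, scale=6.0, size=2000, random_state=42)

# Maximum-likelihood fit with the location pinned at zero, as usual for wind.
shape, loc, scale = stats.weibull_min.fit(wind_speeds, floc=0)
print(f"shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted model.
ks = stats.kstest(wind_speeds, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}")
```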

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 199
25 Microfabrication and Non-Invasive Imaging of Porous Osteogenic Structures Using Laser-Assisted Technologies

Authors: Irina Alexandra Paun, Mona Mihailescu, Marian Zamfirescu, Catalin Romeo Luculescu, Adriana Maria Acasandrei, Cosmin Catalin Mustaciosu, Roxana Cristina Popescu, Maria Dinescu

Abstract:

A major concern in bone tissue engineering is to develop complex 3D architectures that mimic the natural cell environment, facilitate cell growth in a defined manner and allow the flow transport of nutrients and metabolic waste. In particular, porous structures of controlled pore size and positioning are indispensable for growing human-like bone structures. Another concern is to monitor both the structures and the seeded cells with high spatial resolution and without interfering with the cells' natural environment. The present approach relies on laser-based technologies employed for fabricating porous biomimetic structures that support the growth of osteoblast-like cells and for their non-invasive 3D imaging. Specifically, the porous structures were built by two-photon polymerization direct writing (2PP-DW) of the commercially available photoresist IP-L780, using the Photonic Professional 3D lithography system. The structures consist of vertical tubes with micrometer-sized heights and diameters in a honeycomb-like spatial arrangement. These were fabricated by irradiating the IP-L780 photoresist with focused laser pulses with a wavelength centered at 780 nm, a 120 fs pulse duration and an 80 MHz repetition rate. The samples were precisely scanned in 3D by piezo stages, with coarse positioning done by motorized XY stages. The scanning path was programmed through a writing language (GWL) script developed by Nanoscribe. Following laser irradiation, the unexposed regions of the photoresist were washed out by immersing the samples in Propylene Glycol Monomethyl Ether Acetate (PGMEA). The porous structures were seeded with osteoblast-like MG-63 cells, and their osteogenic potential was tested in vitro. The cell-seeded structures were analyzed in 3D using digital holographic microscopy (DHM). DHM is a marker-free, high-spatial-resolution imaging tool in which hologram acquisition is performed non-invasively, i.e., without interfering with the cells' natural environment. Following hologram recording, a digital algorithm provided a 3D image of the sample, as well as information about its refractive index, which is correlated with the intracellular content. The axial resolution of the images went down to the nanoscale, while the temporal scales ranged from milliseconds up to hours. Hologram recording did not involve sample scanning, and the whole image was available in a single frame covering a 200 μm field of view. Processing of the digital holograms provided 3D quantitative information on the porous structures and allowed a quantitative analysis of the cellular response with respect to the porous architectures. The cellular shape and dimensions were found to be influenced by the underlying micro-relief. Furthermore, the intracellular content gave evidence of the beneficial role of the porous structures in promoting osteoblast differentiation. In all, the proposed laser-based protocol emerges as a promising tool for the fabrication and non-invasive imaging of porous constructs for bone tissue engineering. Acknowledgments: This work was supported by a grant of the Romanian Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project PN-II-RU-TE-2014-4-2534 (contract 97 from 01/10/2015), and by UEFISCDI PN-II-PT-PCCA no. 6/2012. Part of this work was performed in the CETAL laser facility, supported by the National Program PN 16 47 - LAPLAS IV.
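
The digital refocusing at the heart of DHM is commonly done with the angular spectrum method: the recorded complex field is propagated numerically to the plane of interest via a Fourier-domain transfer function. The sketch below shows that generic step only, on a toy phase object; real DHM pipelines first isolate the +1 diffraction order from the hologram (omitted here), and all parameters are illustrative assumptions rather than the study's reconstruction code.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (all lengths in meters)
    using the angular spectrum method; evanescent components are dropped."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared axial spatial frequency
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy example: refocus a 512x512 field by 50 um at 633 nm, 0.2 um sampling.
field = np.ones((512, 512), dtype=complex)
field[200:312, 200:312] *= np.exp(1j * 0.5)     # a phase object (e.g. a cell)
refocused = angular_spectrum(field, 633e-9, 0.2e-6, 50e-6)
print(np.abs(refocused).mean())
```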

Keywords: biomimetic, holography, laser, osteoblast, two photon polymerization

Procedia PDF Downloads 249
24 Lack of Regulation Leads to Complexity: A Case Study of the Free Range Chicken Meat Sector in the Western Cape, South Africa

Authors: A. Coetzee, C. F. Kelly, E. Even-Zahav

Abstract:

Dominant approaches to livestock production are harmful to the environment, human health and animal welfare, yet global meat consumption is rising. Sustainable alternative production approaches are therefore urgently required, and 'free range' is the main alternative offered for chicken meat in South Africa (and globally). Although the South African Poultry Association provides non-binding guidelines, there is a lack of formal definition and regulation of free range chicken production, meaning it is unclear what this alternative entails and whether it is consistently practised (a trend observed globally). The objective of this exploratory qualitative case study is therefore to investigate who and what determines free range chicken. The case study, conducted from a social constructivist worldview, uses semi-structured interviews, photographs and document analysis to collect data. Interviews were conducted with those involved in bringing free range chicken to the market: farmers, chefs, retailers and regulators. Data were analysed using thematic analysis to establish the dominant patterns in the data. The five major themes identified (based on prevalence in the data and on achieving the research objective) are: 1) free range means a bird reared with good animal welfare in mind; 2) free range means quality meat; 3) free range means a profitable business; 4) free range is determined by decision makers or by access to markets; and 5) free range is coupled with concerns about the lack of regulation. Unpacking the findings in the context of the literature reveals who and what determines free range. The research uncovers wide-ranging interpretations of 'free range', driven by the absence of formal regulation of free range chicken practices and the lack of independent private certification. This means that the term 'free range' is socially constructed, and thus varied and complex. The case study also shows that whether chicken meat counts as free range is generally determined by those who have access to markets. Large retailers claim adherence to the internationally recognised Five Freedoms, also included in the South African Poultry Association Code of Good Practice, which others in the sector say are too broad to be meaningful. Producers describe animal welfare concerns as the main driver of how they practise and view free range production, yet these interpretations vary. An additional driver is a focus on human health, which participants pursue mainly through the use of antibiotic-free feed, resulting in what participants regard as higher-quality meat. The participants are also strongly driven by business imperatives, with most stating that free range chicken should carry a higher price than conventionally reared chicken due to increased production costs. Recommendations from this study focus on, inter alia, the need to understand consumers' perspectives on free range chicken, given that those in the sector claim they are responding to consumer demand, and on conducting environmental research, such as life cycle assessment studies, to establish the true (environmental) sustainability of free range production. At present, it seems the sector responds mostly to social sustainability: human health and animal welfare.

Keywords: chicken meat production, free range, socially constructed, sustainability

Procedia PDF Downloads 126