Search results for: code blue drill
55 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (via a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1 year (2019/2020) between those who had and those who did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS).
The probability of hospitalization, ED visit, and ambulatory care was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific obesity-related health conditions have a higher probability of healthcare use and incur greater costs than those without these comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for.
In a specific referent case, hypertension was the costliest condition (44% prevalence, with an additional annual cost of $715 [$678/$753] per person). If these findings hold for the Canadian population, hypertension among adults living with obesity represents an estimated additional annual healthcare cost of $2.5 billion (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
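The $2.5 billion extrapolation follows from simple multiplication of the reported figures. A minimal sketch; the Canadian adult population of roughly 30 million is an assumption for illustration, while the 26% obesity rate, 44% hypertension prevalence, and $715 incremental cost come from the abstract:

```python
# Back-of-envelope reproduction of the national extrapolation.
adult_population = 30_000_000   # assumed; not stated in the abstract
obesity_rate = 0.26             # reported adult obesity rate
hypertension_prevalence = 0.44  # reported prevalence among adults with obesity
incremental_cost = 715          # reported $CDN per person per year

total = adult_population * obesity_rate * hypertension_prevalence * incremental_cost
print(f"${total / 1e9:.1f} billion")  # matches the abstract's ~$2.5 billion
```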
54 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. The determination of quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented, along with an example of the visualization of Li transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes.
Due to the better spatial resolution, more accurate input parameters for model calculations can be determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis of wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
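Quantitative LIBS of the kind described generally rests on calibrating line intensity against reference samples of known composition. A minimal sketch of that step; the intensity and concentration values below are illustrative, not measurements from this study:

```python
import numpy as np

# Hypothetical calibration samples: chlorine line intensity (arbitrary
# units) versus known chloride content (% by binder mass).
intensity = np.array([0.12, 0.35, 0.61, 0.88, 1.10])
chloride = np.array([0.1, 0.5, 1.0, 1.5, 2.0])

# Linear calibration: concentration = a * intensity + b
a, b = np.polyfit(intensity, chloride, 1)

def to_concentration(line_intensity):
    """Convert a measured line intensity to chloride content (% binder)."""
    return a * line_intensity + b

# A line scan across a split core then becomes a depth profile of
# chloride content, here at 1 mm intervals as in the abstract.
profile = to_concentration(np.array([0.9, 0.6, 0.3, 0.15]))
```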
53 Concealing Breast Cancer Status: A Qualitative Study in India
Authors: Shradha Parsekar, Suma Nair, Ajay Bailey, Binu V. S.
Abstract:
Background: Concealment of cancer-related information is seen in many low- and middle-income countries and may be associated with multiple factors. There is comparatively little information about how women diagnosed with breast cancer disclose cancer-related information to their social contacts and vice versa. To gain insight into participants’ experiences, opinions, expectations, and attitudes, a qualitative approach is suitable. Therefore, this study, involving in-depth interviews, was planned to lessen this gap. Methods: Interviews were conducted separately with breast cancer patients and their caregivers using a semi-structured qualitative interview guide. Purposive and convenience sampling were used to recruit patients and caregivers, respectively. Ethical clearance and permission from the tertiary hospital were obtained, and participants were selected from the Udupi district, Karnataka, India. After obtaining a list of diagnosed breast cancer cases, participants were contacted in person and their willingness to take part in the study was recorded. About 39 caregivers and 35 patients across different breast cancer stages were recruited; most of the patients had stage II or III cancer. Interviews were recorded with prior permission. Data were managed with Atlas.ti 8 software. The recordings were transcribed, translated, and coded in two cycles. Codes were grouped according to whom the breast cancer status was concealed from and the underlying reasons. Main findings: The following codes and code families emerged from the data. 1) Concealing the breast cancer status from social contacts other than close family members (such as extended family, neighbors, and friends).
Participants perceived the reasons as: a) to avoid probing questions from people (questions that have no answers); b) to avoid courtesy visits (it is customary in Indian culture to visit the sick), which are inconvenient for the patient and oblige caregivers to offer something and talk to visitors; c) to avoid people being shocked (reacting as if cancer were different from other diseases), becoming emotional or sad, or fearing death; d) to avoid negative suggestions or careless talk in front of the patient, which may affect the patient negatively; e) to avoid stigmatization; f) to avoid creating an obstacle to a child’s marriage. 2) Participants concealed the breast cancer status from young children because they perceived that it may a) affect their studies, b) affect them emotionally, or c) scare them. 3) Caregivers concealed the breast cancer status from patients for fear of a) worsening the patient’s health, or of the patient b) becoming tense, c) being shocked, or d) becoming scared. However, some participants stressed the importance of disclosing the cancer status to social contacts or the patient to raise awareness of the disease. Conclusion: News of breast cancer spreads quickly, so patients and families conceal it for many reasons. Globally, owing to physicians’ ethical obligations, there is an inclination toward fuller disclosure of the cancer diagnosis and prognosis to the patient. However, whether the patient and social contacts should know the status remains an ongoing debate, especially in a country like India.
Keywords: breast cancer, concealing cancer status, India, qualitative study
52 User-Centered Design in the Development of Patient Decision Aids
Authors: Ariane Plaisance, Holly O. Witteman, Patrick Michel Archambault
Abstract:
Upon admission to an intensive care unit (ICU), all patients should discuss their wishes concerning life-sustaining interventions (e.g., cardiopulmonary resuscitation (CPR)). Without such discussions, interventions that prolong life at the cost of decreasing its quality may be used without appropriate guidance from patients. We employed user-centered design to adapt an existing decision aid (DA) about CPR to create a novel wiki-based DA adapted to the context of a single ICU and tailored to individual patients’ risk factors. During Phase 1, we conducted three weeks of ethnography of the decision-making context in our ICU to identify clinician and patient needs for a decision aid. During this time, we observed five dyads of intensivists and patients discussing their wishes concerning life-sustaining interventions. We also conducted semi-structured interviews with the attending intensivists in this ICU. During Phase 2, we conducted three rounds of rapid prototyping involving 15 patients and 11 other allied health professionals. We recorded discussions between intensivists and patients and used a standardized observation grid to collect patients’ comments and sociodemographic data. We applied content analysis to field notes, verbatim transcripts, and the completed observation grids. Each round of observations and rapid prototyping iteratively informed the design of the next prototype. We also used the programming architecture of a wiki platform to embed the GO-FAR prediction rule code, which we linked to risk graphics software to better illustrate the calculated outcome risks. During Phase 1, we identified the need to add a section to our DA concerning invasive mechanical ventilation in addition to CPR, because both life-sustaining interventions were often discussed together by physicians.
During Phase 2, we produced a context-adapted decision aid about CPR and mechanical ventilation that includes a values clarification section; questions about the patient’s functional autonomy prior to ICU admission and the functional decline they would judge acceptable upon hospital discharge; risks and benefits of CPR and invasive mechanical ventilation; population-level statistics about CPR; a synthesis section to help patients come to a final decision; and an online calculator based on the GO-FAR prediction rule. Even though the three rounds of rapid prototyping led to simplifying the information in our DA, 60% (n=3/5) of the patients involved in the last cycle still did not understand the purpose of the DA. We also identified gaps in the discussion and documentation of patients’ preferences concerning life-sustaining interventions (e.g., CPR, invasive mechanical ventilation). The final version of our DA and our online wiki-based GO-FAR risk calculator, which uses the IconArray.com risk graphics software, are available at www.wikidecision.org and are ready to be adapted to other contexts. Our results inform producers of decision aids on the use of wikis and user-centered design to develop DAs that are better adapted to users’ needs. Further work is needed on a video version of our DA. Physicians will also need training to use our DA and to develop shared decision-making skills about goals of care.
Keywords: ethnography, intensive care units, life-sustaining therapies, user-centered design
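The final step of a calculator such as the one described, turning a prognostic score into the "n out of 100 patients" format used by icon-array risk graphics, can be sketched schematically. The logistic scale below is illustrative only and does not reproduce the published GO-FAR coefficients:

```python
import math

def survival_probability(linear_score):
    """Logistic transform of a prognostic score. The scale is
    hypothetical, not the actual GO-FAR point values; here a higher
    score maps to a lower survival probability."""
    return 1.0 / (1.0 + math.exp(linear_score))

def icon_array_counts(probability, total=100):
    """Express a probability as counts out of 100 patients, the format
    rendered by icon-array risk graphics."""
    survivors = round(probability * total)
    return survivors, total - survivors

p = survival_probability(1.5)
survivors, non_survivors = icon_array_counts(p)
```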
51 Using Business Interactive Games to Improve Management Skills
Authors: Nuno Biga
Abstract:
Continuous process improvement is a permanent challenge for managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to exploit synergies between processes and eliminate waste of time and other resources. Leaders determine the efficiency of teams through their influence on collaborators, their motivation, and the consolidation of a feeling of (group) ownership. “Organizational health” depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). It is therefore important that managers can correct in advance any deviation from expected leadership practice. Top management teams must act as regulatory agents of leadership within the organization, monitoring actions and ensuring the alignment of managers with the humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of “Business Interactive Games” (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years, in both educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs contribute to improving the organizational environment by evaluating players’ (agents’) skills, empowering their capabilities, and identifying the critical factors that create value in each context.
The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of the organization whose operation is simulated. The players in BI GAMES can be managers or employees in different roles in the organization, or students in a learning context. They interact with each other and are asked to decide or make choices among several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate the resources used and needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious games approach, organizational managers can confront the scientific approach to strategic decision-making with their real-life approach based on past experience. Considering that each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including discussion amongst participants.
Keywords: business interactive games, gamification, management empowerment skills, simulation living labs
50 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages: it avoids long DC cables, eliminates module mismatch losses, minimizes partial shading effects, and improves safety and flexibility in installations. Due to these benefits, renewable energy technology with solar PV micro inverters is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies, and prototyping of a 300Wₚ single phase PV micro inverter. It investigates two different topologies: a single-stage flyback/forward PV micro inverter configuration, and a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. The work covers power decoupling techniques that reduce the input filter capacitor size needed to buffer the double-line (100 Hz) ripple energy and eliminate the use of electrolytic capacitors. The double-line oscillation reflected back to the PV module degrades Maximum Power Point Tracking (MPPT) performance and distorts the grid current. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of the double-line ripple oscillation to the PV side, improving MPPT performance, and to the grid side, improving current quality.
The power hardware topology accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics, and film capacitors with a long lifespan. The digital controller hardware platform, with its external peripheral interfaces, is built on the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, prototype hardware for the single phase photovoltaic micro inverter with the double-stage configuration was developed, and a comparative analysis between the two configurations, with experimental results, will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
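The abstract does not detail the ripple-rejecting MPPT algorithm itself. As a point of reference, the textbook perturb-and-observe loop that such an algorithm refines can be sketched as follows; the PV power curve, voltages, and step size are illustrative, not from this design:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One iteration of the textbook P&O MPPT loop: move the operating
    voltage in the direction that increased extracted power. A
    ripple-rejecting MPPT (as in the abstract) would additionally filter
    the 100 Hz component out of v and p before this decision."""
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v
    if (dp > 0) == (dv > 0):
        return v + step   # power rose: keep moving in the same direction
    return v - step       # power fell: reverse direction

def pv_power(v):
    """Toy PV power curve with a maximum power point near 30 V."""
    return max(0.0, 300 - 0.35 * (v - 30) ** 2)

# The operating point climbs to the MPP and then oscillates around it,
# the characteristic steady-state behaviour of P&O.
v_prev, v = 20.0, 20.5
for _ in range(100):
    v_next = perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
```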
49 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media
Authors: Sharmila Parmanand
Abstract:
This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes on prostitution: first, raid and rescue operations conducted by law enforcement; and second, prostitution on military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources that address these themes are explored through content analysis. Conclusions are drawn based on specific terms commonly used to refer to sex workers, how sex workers are seen as performing their cultural roles as mothers and wives, how sex work is depicted, associations made between sex work and public health, representations of clients and managers and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups, and which actors are presumed to be issue experts. Images of how prostitution is used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality but also as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. 
Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.
Keywords: discourse analysis, news media, sex work, trafficking
48 Risking Injury: Exploring the Relationship between Risk Propensity and Injuries among an Australian Rules Football Team
Authors: Sarah A. Harris, Fleur L. McIntyre, Paola T. Chivers, Benjamin G. Piggott, Fiona H. Farringdon
Abstract:
Australian Rules Football (ARF) is an invasion-based, contact field sport with over one million participants. The contact nature of the game increases exposure to all injuries, including head trauma. Evidence suggests that both concussion and sub-concussive traumas such as head knocks may damage the brain, in particular the prefrontal cortex. The prefrontal cortex may not reach full maturity until a person is in their early twenties, with males taking longer to mature than females. Repeated trauma to the prefrontal cortex during maturation may lead to negative social, cognitive, and emotional effects. It is also during this period that males exhibit high levels of risk-taking behaviour. The relationship between risk propensity and the incidence of injury is an unexplored area of research: little research has considered whether players’ (especially younger players’) level of risk propensity in everyday life places them at an increased risk of injury. Hence, the current study investigated whether a relationship exists between risk propensity and self-reported injuries, including diagnosed concussion and head knocks, among male ARF players aged 18 to 31 years. Method: The study was conducted over 22 weeks with one West Australian Football League (WAFL) club during the 2015 competition. Pre-season risk propensity was measured using the 7-item self-report Risk Propensity Scale. Possible scores ranged from 9 to 63, with higher scores indicating higher risk propensity. Players reported their self-perceived injuries (concussion, head knocks, upper body and lower body injuries) fortnightly using the WAFL Injury Report Survey (WIRS). A unique ID code ensured player anonymity while enabling linkage of survey responses and injury data tracking over the season. A General Linear Model (GLM) was used to analyse whether there was a relationship between risk propensity score and the total number of injuries for each injury type.
Results: Seventy-one players (N=71), with an age range of 18.40 to 30.48 years and a mean age of 21.92 years (±2.96 years), participated in the study. Players’ mean risk propensity score was 32.73 (SD ±8.38). Four hundred and ninety-five (495) injuries were reported. The most frequently reported injury was head knocks, representing 39.19% of total reported injuries. The GLM identified a significant relationship between risk propensity and head knocks (F=4.17, p=.046). No other injury types were significantly related to risk propensity. Discussion: A positive relationship between risk propensity and head trauma in contact sports (specifically WAFL) was found. Assessing players’ risk propensity may therefore identify those more at risk of head injuries, potentially leading to greater monitoring and education of these players throughout the season regarding self-identification of head knocks and symptoms that may indicate trauma to the brain. This is important because many players involved in WAFL are in their late teens or early twenties and hence may be at greater risk of negative outcomes if they experience repeated head trauma. Continued education and research into the risks associated with head injuries has the potential to improve player well-being.
Keywords: football, head injuries, injury identification, risk
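The abstract reports a GLM without specifying its form. One plausible model for count outcomes such as head knocks is a Poisson regression with log link, sketched here on synthetic data; every number below is simulated for illustration, none comes from the WAFL study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data (N=71 players): risk propensity scores
# on the 9-63 scale and per-player head-knock counts generated with a
# weak positive effect (intercept and slope are assumed values).
n = 71
risk = rng.uniform(9, 63, n)
true_beta = np.array([0.5, 0.03])
counts = rng.poisson(np.exp(true_beta[0] + true_beta[1] * risk))

# Poisson regression (log link) fitted by iteratively reweighted least
# squares, a standard algorithm for this GLM family.
X = np.column_stack([np.ones(n), risk])
beta = np.array([np.log(counts.mean()), 0.0])  # conventional starting point
for _ in range(25):
    mu = np.exp(X @ beta)                # current fitted mean counts
    z = X @ beta + (counts - mu) / mu    # working response
    W = mu                               # Poisson working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
# A positive fitted slope beta[1] indicates that higher risk propensity
# predicts more head knocks, mirroring the direction of the reported result.
```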
47 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces, and vortex-induced oscillation is one such excitation that can lead to failure. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify these models. For this reason, theoretical models developed with the help of experimental laboratory data are used to analyse chimneys for vortex-induced forces, which calls for reliability analysis of the predicted responses produced by the vortex shedding phenomenon. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scant. In the present study, a reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are therefore ignored. The vortex shedding is modeled as a stationary random process represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency domain spectral analysis using a matrix approach.
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used to determine the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined, and the reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration
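The role of the Gumbel wind-speed distribution in building a fragility curve can be illustrated schematically. The sketch below maps tip-displacement thresholds to assumed critical wind speeds and evaluates the annual exceedance probability; it omits the Vanmarcke crossing-rate analysis, and all thresholds, speeds, and Gumbel parameters are hypothetical rather than taken from the 210 m chimney of the study:

```python
import math

def gumbel_exceedance(v, loc, scale):
    """Annual probability that the mean wind speed exceeds v, assuming a
    Gumbel type-I distribution of annual maxima: 1 - F(v) where
    F(v) = exp(-exp(-(v - loc)/scale))."""
    return 1.0 - math.exp(-math.exp(-(v - loc) / scale))

# Hypothetical fragility curve: each tip-displacement threshold (mm) is
# assumed to be reached only above some critical wind speed (m/s).
thresholds_mm = [50, 100, 200, 400]
critical_speed = {50: 18.0, 100: 22.0, 200: 27.0, 400: 33.0}  # assumed

fragility = {d: gumbel_exceedance(critical_speed[d], loc=20.0, scale=4.0)
             for d in thresholds_mm}
# Larger thresholds require rarer winds, so the exceedance probability
# falls monotonically along the curve.
```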
46 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the tracking of large numbers of droplets, as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity as an expansion over the droplet response time, and the transport equation can be solved in the Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long, circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. 
It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes: 0.1, 0.2; Reynolds: 10, 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted high droplet concentrations in zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on these comparisons, the Osiptsov method provided a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of the results of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
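The core idea of the Osiptsov (fully Lagrangian) method can be illustrated in one dimension. Along each droplet trajectory one also integrates the Jacobian J = dx/dx0 of the Lagrangian-to-Eulerian map; the concentration then follows algebraically as n = n0/|J|, with no neighbor counting. The carrier field u(x) = sin(x), the parameter values, and the explicit Euler integration below are illustrative assumptions, not the paper's 2D cylinder-flow setup.

```python
import math

def droplet_concentration_1d(x0, v0, tau, u, du_dx, t_end, dt=1e-3, n0=1.0):
    """1D sketch of the fully Lagrangian (Osiptsov) method with Stokes drag.

    Alongside the trajectory (dx/dt = v, dv/dt = (u(x) - v)/tau), integrate
    the Jacobian J = dx/dx0 and its rate w = dv/dx0; concentration = n0/|J|.
    """
    x, v = x0, v0
    J, w = 1.0, 0.0  # initial Jacobian and its velocity counterpart
    t = 0.0
    while t < t_end:
        ax = (u(x) - v) / tau              # droplet acceleration (Stokes drag)
        aw = (du_dx(x) * J - w) / tau      # sensitivity equation for w
        x += v * dt                        # explicit Euler update
        v += ax * dt
        J += w * dt                        # dJ/dt = w
        w += aw * dt
        t += dt
    return n0 / abs(J)

# Illustrative carrier field u(x) = sin(x) (assumption, not from the paper)
n = droplet_concentration_1d(x0=0.5, v0=0.0, tau=0.1,
                             u=math.sin, du_dx=math.cos, t_end=1.0)
```

A vanishing |J| signals a caustic, where the concentration predicted by this method becomes singular; this is exactly the regime where the two approaches in the abstract diverge most.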
Procedia PDF Downloads 257
45 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, design of experiments and optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, with multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated based on values automatically chosen for the control points defined during parameterization. The optimization was carried out using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to achieve the global optimum. The evaluation of successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it uses derivative information of the objective function. The objective function was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant. 
The performance was quantified by using a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor. The adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency; total pressure loss and entropy were reduced. The combination of hub and shroud did not match the results achieved in the individual hub and shroud cases. This may be caused by the fact that there were too many control variables. The fourth optimization case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud. The efficiency increase exceeded that of the individual optimization cases, with a mass flow rate equal to that of the baseline design of the turbine. The results of the artificial neural network and the conjugate gradient method were compared.
Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
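The gradient-based route above relies on the classical conjugate gradient iteration, which needs only gradient (residual) information. The sketch below applies it to a small symmetric positive-definite system, which is equivalent to minimizing a quadratic objective; it is a didactic stand-in, not the CFD-coupled endwall optimization used in the paper.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, x0, tol=1e-12, max_iter=100):
    # Solves A x = b for symmetric positive-definite A, i.e. minimizes
    # 0.5 x^T A x - b^T x, using only residual (negative gradient) info.
    x = list(x0)
    r = [bi - ci for bi, ci in zip(b, matvec(A, x))]  # residual = -gradient
    p = list(r)                                       # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                   # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small illustrative system (solution is [1/11, 7/11])
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

For an n-dimensional quadratic, the iteration converges in at most n steps in exact arithmetic, which is why far fewer flow-solver calls are needed than with a population-based genetic search.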
Procedia PDF Downloads 225
44 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logic relation (L) can be formally defined as a triple ontological relation (LO) equivalent to LO in ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model looks to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena for a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implemented with SQL programming code, and allows data export to XML schemas for achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are likewise inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. 
Mondoc contributes to the development of web directories and achieves a more selective search of e-documents (classification ontology). It can also foster the production of online catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, by a prior adaptation of the base ontology to structured meta-languages such as OWL or RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a semantic indexing approach for facets. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LC, FL⟩, where LN is a set of nodes that connect with other nodes to form a rooted tree in ⟨LN, LE⟩; LT specifies a set of terms; and LC acts as a finite set of concepts, encoded in a formal language, FL. Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy), but neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages oriented to structured documents in XML.
Keywords: hypernymy, information retrieval, lightweight ontology, resonance
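The inverse-frequency and KWIC machinery described above (implemented in the paper with R and the RMySQL package) can be sketched in Python. The toy corpus and whitespace tokenization below are illustrative assumptions; only the two textbook measures, inverse document frequency and keywords-in-context, follow the abstract.

```python
import math

def inverse_document_frequency(term, documents):
    # Documents are token lists; a term occurring in every document gets
    # idf 0 (no discriminating power), a term in few documents a high idf.
    df = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / df) if df else 0.0

def kwic(term, tokens, window=3):
    # Keywords in context: each occurrence of the term with up to `window`
    # tokens of syntagmatic context on either side.
    return [" ".join(tokens[max(0, i - window):i + window + 1])
            for i, tok in enumerate(tokens) if tok == term]

# Toy corpus (illustrative, not from the Mondoc dataset)
corpus = [
    "migration across the atlantic increased".split(),
    "atlantic migration routes changed over time".split(),
    "the civil war interrupted migration".split(),
]
idf_migration = inverse_document_frequency("migration", corpus)  # all 3 docs
idf_routes = inverse_document_frequency("routes", corpus)        # 1 doc only
contexts = kwic("migration", corpus[1])
```

The inverse relationship the abstract describes is visible here: the term present in every document scores zero, while the rarer term scores higher.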
Procedia PDF Downloads 125
43 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice
Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson
Abstract:
As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges and a commitment to developing and training ethical practitioners outline the characteristics of a profession embedded with leadership skills. This presentation will give an overview of the historical context of social work leadership; examine social work as a unique leadership model composed of its own qualities and theories that inform effective leadership capability as it relates to our code of ethics; reflect critically on leadership theories and their foundational comparison; and, finally, look at recommendations for, and implementation in, social work education and practice. As with defining leadership, there is no universally accepted definition of social work leadership. However, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social work leaders and followers. However, this research is interested in working backward from that approach and examining social workers' perspectives on leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership but not to lead. In addition, this research is significant because it explores aspiring social work leaders' competence to translate social work practice into direct leadership skills. The research question seeks to answer whether social work training and competencies are sufficient to determine whether social workers believe they possess the capacity and skill to engage in leadership practice. 
Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate how the development of social workers is sufficient in defining leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, whereas leaders are in relation with others and embody partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Other than the proposed social work leadership measurement tool, there is no other mechanism based on social work theory and designed to measure the capacity and skill of social work leadership.
Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool
Procedia PDF Downloads 172
42 Special Educational Needs Coordinators in England: Changemakers in Mainstream School Settings
Authors: Saneeya Qureshi
Abstract:
This paper reports doctoral research into the impact of Special Educational Needs Coordinators (SENCOs) on teachers in England, UK. Since 1994, it has been compulsory for all mainstream schools in the UK to have a SENCO who co-ordinates assessment and provision for supporting pupils with Special Educational Needs (SEN), helping teachers to develop and implement optimal SEN planning and resources. SENCOs’ roles have evolved as various policies continually redefined SEN provision, impacting their positioning within the school hierarchical structure. SENCOs in England are increasingly recognised as key members of school senior management teams. In this paper, it will be argued that despite issues around the transformative ‘professionalisation’ of their role, and subsequent conflict around boundaries and power relations, SENCOs enhance teachers’ abilities in terms of delivering optimal SEN provision. There is a significant international dimension to the issue: a similar role in respect of SEN management already exists in countries such as Ireland, Finland and Singapore, whilst in other countries, such as Italy and India, the introduction of a role similar to that of a SENCO is currently under discussion. The research question addressed is: do SENCOs enhance teachers’ abilities to be effective teachers of children with Special Educational Needs? The theoretical framework of the project is that of interpretivism, as it is acknowledged that contexts and realities are social constructions. The study applied a mixed method approach consisting of two phases. The first phase involved a purposive survey (n=42) of 223 primary school SENCOs, which enabled a deeper insight into SENCOs’ perceptions of their roles in relation to teachers. The second phase consisted of semi-structured interviews (n=36) of SENCOs, teachers and head teachers, in addition to scrutiny of school SEN-related documentation. 
‘Trustworthiness’ was accomplished through data and methodological triangulation, in addition to a rigorous process of coding and thematic analysis. The research was informed by an Ethical Code as per national guidelines. Research findings point to the evolutionary aspect of the SENCO role having engendered a culture of expectations amongst practitioners, as SENCOs transition from being ‘fixers’ to being ‘enablers’ of teachers. Outcomes indicate that SENCOs can empower teaching staff through the dissemination of specialist knowledge. However, resources must be clearly identified for such dissemination to take place. It is imperative that SENCOs and teachers alike address the issue of absolution of responsibility that arises when the ownership of, and accountability for, the planning and implementation of SEN provision are not clarified, so as to ensure the promotion of a positive school ethos around inclusive practices. Optimal outcomes through effective SEN interventions and teaching practices are positively correlated with the inclusion of teachers in the planning and execution of SEN provision. An international audience can consider how the key findings are manifest in a global context, with reference to their own educational settings. Research outcomes can aid the development of the specific competencies needed to shape optimal inclusive educational settings in accordance with official global priorities pertaining to inclusion.
Keywords: inclusion, school professionals, school leadership, special educational needs (SEN), special educational needs coordinators (SENCOs)
Procedia PDF Downloads 194
41 Collagen/Hydroxyapatite Compositions Doped with Transitional Metals for Bone Tissue Engineering Applications
Authors: D. Ficai, A. Ficai, D. Gudovan, I. A. Gudovan, I. Ardelean, R. Trusca, E. Andronescu, V. Mitran, A. Cimpean
Abstract:
In recent years, scientists have strived to mimic bone structures in order to develop implants and biostructures with higher biocompatibility and reduced rejection rates. One way to reach this goal is to use materials similar to those of bone, namely collagen/hydroxyapatite composite materials. However, it is very important to tailor both the composition and the microstructure of the material, which together ensure both optimal osteointegration and the mechanical properties required by the application. In this study, new collagen/hydroxyapatite composite materials doped with Cu, Li, Mn and Zn were successfully prepared. The synthesis method is described below: weigh 7.3067 g of Ca(OH)₂ together with ZnCl₂ (0.134 g), CuSO₄ (0.159 g), Li₂CO₃ (0.133 g), and MnCl₂·4H₂O (0.1971 g), and suspend in 100 ml of distilled water under magnetic stirring. To the suspension thus obtained, a solution of NaH₂PO₄·H₂O (8.247 g dissolved in 50 ml of distilled water) is added dropwise at 1 ml/min, followed by adjustment of the pH to 9.5 with HCl; finally, the product is filtered and washed until neutral pH. The as-obtained slurry was dried in an oven at 80°C and then calcined at 600°C in order to ensure proper purification of the final product from organic phases, also inducing proper sterilization of the mixture before insertion into the collagen matrix. The collagen/hydroxyapatite composite materials are tailored from a morphological point of view to balance their biocompatibility and bio-integration against mechanical properties, whereas the addition of the dopants is aimed at improving the biological activity of the samples. The addition of transitional metals can improve biocompatibility and especially osteoblast adhesion (Mn²⁺), or induce slightly better osteoblast differentiation, Zn²⁺ being a cofactor for many enzymes, including those responsible for cell differentiation. If the amount is too high, the final material can become toxic and lose all of its biocompatibility. 
In order to achieve good biocompatibility without reaching a cytotoxic effect, the amount of transitional metals added has to be maintained at low levels (0.5% molar). The amount of transitional metals entering the elemental cell of HA will be verified using an inductively coupled plasma mass spectrometry system. This highly sensitive technique is necessary because, at such low levels of transitional metals, the difference between biocompatible and cytotoxic is a very thin line, thus requiring proper and thorough investigation using a precise technique. In order to determine the structure and morphology of the obtained composite materials, IR spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy dispersive X-ray spectrometry (EDS) were used. Acknowledgment: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project “Biomimetic porous structures obtained by 3D printing developed for bone tissue engineering” (BIOGRAFTPRINT), No. 127PED/2017 is also highly acknowledged.
Keywords: collagen, composite materials, hydroxyapatite, bone tissue engineering
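The dopant levels implied by the recipe can be cross-checked arithmetically. The sketch below converts the reported reagent masses to moles and reports each dopant cation's mole ratio relative to calcium; the molar masses are standard textbook values (not from the paper), and the resulting ratios are simply those implied by the reported masses.

```python
# Standard molar masses in g/mol (textbook values, an assumption here)
M = {"Ca(OH)2": 74.09, "ZnCl2": 136.29, "CuSO4": 159.61,
     "Li2CO3": 73.89, "MnCl2.4H2O": 197.91}

# Reagent masses as reported in the synthesis description (g)
masses = {"Ca(OH)2": 7.3067, "ZnCl2": 0.134, "CuSO4": 0.159,
          "Li2CO3": 0.133, "MnCl2.4H2O": 0.1971}

moles = {k: masses[k] / M[k] for k in masses}

# Moles of each dopant cation (Li2CO3 supplies two Li per formula unit)
cation_moles = dict(moles)
cation_moles["Li2CO3"] *= 2

# Mole ratio of each dopant cation to Ca
fractions = {k: cation_moles[k] / moles["Ca(OH)2"]
             for k in cation_moles if k != "Ca(OH)2"}
```

All ratios come out at the percent level or below, i.e. in the dilute-dopant regime the abstract argues is necessary to stay clear of cytotoxicity.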
Procedia PDF Downloads 206
40 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials
Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina
Abstract:
The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behavior obeys elasticity. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed; for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, the plastic states are not restricted only to those lying on a surface. Elastoplastic bounding surface models have been improved; however, there is still a need to improve their capabilities in simulating the response of anisotropically consolidated cohesive soils, especially the response in extension tests. 
Thus, in this work an improved constitutive model that can more accurately predict diverse stress-strain phenomena exhibited by cohesive soils was developed; in particular, an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The model's numerical implementation in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. After testing the model in simulating the response of cohesive soils through extensive comparisons of model simulations to experimental data, it has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils; for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain responses, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials
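The elastic-trial/plastic-correction logic behind the radial-return step mentioned above can be illustrated in its simplest one-dimensional form. This is a classical textbook sketch with linear isotropic hardening; it is not the multi-surface, anisotropic formulation of the Generalized Bounding Surface Model, only the return-mapping idea its integrator builds on.

```python
def radial_return_1d(stress, eps_p, d_eps, E, H, sigma_y0):
    """One 1D elastoplastic step: elastic predictor, then plastic corrector.

    E: elastic modulus, H: linear hardening modulus, sigma_y0: initial
    yield stress, eps_p: accumulated plastic strain.
    Returns the updated (stress, eps_p).
    """
    trial = stress + E * d_eps                   # elastic predictor
    f = abs(trial) - (sigma_y0 + H * eps_p)      # yield function at trial state
    if f <= 0.0:
        return trial, eps_p                      # step was purely elastic
    d_gamma = f / (E + H)                        # consistency: plastic multiplier
    sign = 1.0 if trial > 0.0 else -1.0
    # return the stress to the (hardened) yield surface
    return trial - E * d_gamma * sign, eps_p + d_gamma

# Illustrative values (MPa): one elastic step, then a plastic one
s, ep = radial_return_1d(0.0, 0.0, 0.001, E=200e3, H=1000.0, sigma_y0=250.0)
s2, ep2 = radial_return_1d(s, ep, 0.001, E=200e3, H=1000.0, sigma_y0=250.0)
```

After the plastic step the updated stress sits exactly on the hardened yield surface, which is the consistency condition the adaptive multistep scheme enforces at every increment.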
Procedia PDF Downloads 315
39 Smart Services for Easy and Retrofittable Machine Data Collection
Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum
Abstract:
This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the prefabrication sheet metal and sheet metal processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities from a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves, by non-invasively measuring the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users. First, wear part and product material condition monitoring and predictive maintenance for sawing processes: the non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and the quality of consumables and materials, so that service providers and machine operators can optimize maintenance and reduce downtime and material waste. Second, optimization of Overall Equipment Effectiveness (OEE) by monitoring machine activity. 
The non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and reduce unplanned machine downtime. Third, estimation of CO2 emissions for connected machines: CO2 emissions are calculated for the entire life of the machine and for individual production steps based on captured power consumption data. This information supports energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SMEs. Easy2IoT provides SMEs with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g. through the retrofitting of existing machines.
Keywords: smart services, IIoT, IIoT-platform, industrie 4.0, big data
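The per-step CO2 estimate described above amounts to integrating sampled power into energy and applying a grid emission factor. A minimal sketch, in which the emission factor, sampling scheme, and function name are assumptions rather than project values:

```python
def co2_emissions_kg(power_samples_w, interval_s, grid_factor_kg_per_kwh=0.4):
    """Estimate CO2 emissions from evenly sampled power readings.

    power_samples_w: power readings in watts, taken every interval_s seconds.
    grid_factor_kg_per_kwh: assumed grid emission factor (kg CO2 per kWh);
    the real factor depends on the local energy mix.
    """
    # Energy: sum of power samples (W) times the sampling interval (s),
    # converted from watt-seconds to kWh (1 kWh = 3.6e6 Ws).
    energy_kwh = sum(power_samples_w) * interval_s / 3.6e6
    return energy_kwh * grid_factor_kg_per_kwh

# One hour at a constant 1 kW, sampled every 60 s -> 1 kWh -> 0.4 kg CO2
emissions = co2_emissions_kg([1000.0] * 60, 60.0)
```

Summing such per-step estimates over every production step of a machine's life gives the lifetime figure the project reports.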
Procedia PDF Downloads 73
38 Effectiveness of Simulation Resuscitation Training to Improve Self-Efficacy of Physicians and Nurses at Aga Khan University Hospital in Advanced Cardiac Life Support Courses Quasi-Experimental Study Design
Authors: Salima R. Rajwani, Tazeen Ali, Rubina Barolia, Yasmin Parpio, Nasreen Alwani, Salima B. Virani
Abstract:
Introduction: Nurses and physicians have a critical role in initiating lifesaving interventions during cardiac arrest. The timely delivery of high-quality Cardiopulmonary Resuscitation (CPR), with advanced resuscitation skills and management of cardiac arrhythmias, is a key dimension of a code during cardiac arrest. The chances of patient survival decrease if healthcare professionals are unable to initiate CPR in a timely manner. Moreover, traditional training does not prepare physicians and nurses to a competent level, and their knowledge declines over time. In this regard, simulation training has been proven effective in promoting resuscitation skills. As a teaching-learning strategy, simulation improves knowledge and skills performance during resuscitation through experiential learning, without compromising patient safety in real clinical situations. The purpose of the study is to evaluate the effectiveness of simulation training in Advanced Cardiac Life Support courses by using a self-efficacy tool. Methods: The study is a quantitative, non-randomized quasi-experimental design. It examined the effectiveness of simulation through self-efficacy in two instructional methods: Medium-Fidelity Simulation (MFS) and the Traditional Training Method (TTM). The sample size was 220. Data were compiled using SPSS. The standardized simulation-based training increases self-efficacy, knowledge, and skills, and improves the management of patients in actual resuscitation. Results: 153 students participated in the study (CG: n = 77; EG: n = 77). The comparison was done between arms in the pre- and post-test (F = 1.69, p = 0.195, df = 1). There was no significant difference between arms in the pre- and post-test. The interaction between arms was examined, and there was no significant difference in the interaction between arms in the pre- and post-test. 
(F = 0.298, p = 0.586, df = 1). However, the results showed that self-efficacy scores were significantly higher within the experimental group in the post-test of the advanced cardiac life support resuscitation courses as compared to the Traditional Training Method (TTM), overall (p < 0.0001, F = 143.316): the experimental group's post-test mean score was 45.01 (SD 9.29) versus a pre-test mean score of 31.15 (SD 12.76), compared to a TTM post-test mean score of 29.68 (SD 14.12) versus a pre-test mean score of 42.33 (SD 11.39). Conclusion: The standardized simulation-based training was conducted in a safe learning environment in Advanced Cardiac Life Support courses, and physicians and nurses benefited in terms of self-confidence, early identification of life-threatening scenarios, early initiation of CPR, provision of high-quality CPR, timely administration of medication and defibrillation, appropriate airway management, rhythm analysis and interpretation, Return of Spontaneous Circulation (ROSC), team dynamics, debriefing, and teaching and learning strategies that will improve patient survival in actual resuscitation.
Keywords: advanced cardiac life support, cardio pulmonary resuscitation, return of spontaneous circulation, simulation
Procedia PDF Downloads 80
37 Implementation of Real-World Learning Experiences in Teaching Courses of Medical Microbiology and Dietetics for Health Science Students
Authors: Miriam I. Jimenez-Perez, Mariana C. Orellana-Haro, Carolina Guzman-Brambila
Abstract:
As part of microbiology and dietetics courses, students of medicine and nutrition analyze the main pathogenic microorganisms and perform dietary analyses. The microbiology course describes in a general way the main pathogens, including bacteria, viruses, fungi, and parasites, as well as their interaction with the human species. We hypothesize that the lack of practical application causes students not to see the value and clinical application of the course, when in reality it is of great importance for healthcare in our country. The medical microbiology and dietetics courses are mostly theoretical, with only a few hours of laboratory practice. Therefore, it is necessary to incorporate new, innovative techniques that involve more practice and community fieldwork, real-case analysis, and real-life situations. The purpose of this intervention was to incorporate real-world learning experiences into the instruction of medical microbiology and dietetics courses in order to improve the learning process, understanding, and application in the field. During a period of 6 months, medicine and nutrition students worked in a community of urban poverty. We worked with 90 children between 4 and 6 years of age from low-income families with no access to medical services, to provide an infectious diagnosis related to the nutritional status of these children. We expected that this intervention would give a different kind of context to medical microbiology and dietetics students, improving their learning process and applying their knowledge and laboratory practice to help a community in need. First, students learned basic skills in microbiological diagnostic tests during laboratory sessions. Once students had acquired the ability to perform biochemical probes and handle biological samples, they went to the community and took stool samples from the children (with the corresponding informed consent).
Students processed the samples in the laboratory, searching for enteropathogenic microorganisms with the RapID™ ONE system (Thermo Scientific™) and for parasites using the modified Willis and Malloy technique. Finally, they compared the results with the nutritional status of the children, previously measured by anthropometric indicators. The anthropometric results were interpreted with the WHO Anthro software (WHO, 2011). The microbiological results were interpreted with the ERIC® Electronic RapID™ Code Compendium software and validated by a physician. The results consisted of analyses of infectious outcomes and nutritional status. Through these fieldwork community learning experiences, our students improved their knowledge of microbiology and were capable of applying it in a real-life situation. They found this kind of learning useful when translating theory into a real-life situation. For most of our students, this was their first contact as health caregivers with a real population, and this contact is very important to help them understand the reality of many people in Mexico. In conclusion, real-world or fieldwork learning experiences empower our students to gain a real and better understanding of how they can apply their knowledge of microbiology and dietetics and help a much-needed population; this is the reality in which many people in our country live.
Keywords: real-world learning experiences, medical microbiology, dietetics, nutritional status, infectious status
Procedia PDF Downloads 132
36 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and It’s Application for Infrastructure Development
Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas
Abstract:
One of the most frightening phenomena of nature is the occurrence of an earthquake, as it has terrible and disastrous effects. Many earthquakes occur every day worldwide, so there is a need for knowledge of trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the worldwide network of seismological stations made this possible. From the analysis of recorded earthquake data, earthquake parameters and source parameters can be computed and earthquake catalogues can be prepared. These catalogues provide information on the origin time, epicenter location (in terms of latitude and longitude), focal depth, magnitude, and other details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. The implementation of integrated GIS technology provides an approach that permits rapid evaluation of a complex inventory database under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS technology provides a powerful tool for displaying outputs and permits users to see the graphical distribution of the impacts of different earthquake scenarios and assumptions. An endeavor has been made in the present study to compile earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can easily be used for further analysis by earthquake engineers. The basic data on the time of occurrence, location, and size of each earthquake have been compiled for querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region.
The user interface also includes the seismic hazard information already worked out under the GSHAP program: seismic hazard, in terms of the probability of exceedance in definite return periods, is provided for the whole world. The seismic zones of the Indian region from IS 1893-2002, the code on earthquake-resistant design of buildings, are included in the user interface. City-wise satellite images have been inserted into the map, and based on actual data the following information can be extracted in real time: • analysis of soil parameters and their effects • microzonation information • seismic hazard and strong ground motion • soil liquefaction and its effect on the surrounding area • impacts of liquefaction on buildings and infrastructure • occurrence of future earthquakes and their effect on existing soil • propagation of ground vibration due to the occurrence of an earthquake. The GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used in real time for present and future infrastructure development, i.e., multi-story structures, irrigation dams and their components, hydropower, etc.
Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development
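The parameter-based catalogue querying described above (time, location, magnitude) can be sketched in a few lines. This is a hedged illustration only: the mini-catalogue entries are loosely modeled on well-known events, not records from the system, and the function names are ours (the actual tool was built in Visual Basic on ArcGIS).

```python
from collections import Counter

# Hypothetical mini-catalogue: (year, lat, lon, depth_km, magnitude)
catalogue = [
    (2001, 23.4, 70.2, 16.0, 7.7),
    (2004, 3.3, 95.8, 30.0, 9.1),
    (2011, 38.3, 142.4, 29.0, 9.0),
    (2015, 28.2, 84.7, 8.2, 7.8),
    (2016, 27.8, 130.8, 10.0, 5.4),
]

def query(cat, min_mag=0.0, lat=(-90, 90), lon=(-180, 180)):
    """Return events matching a magnitude threshold and bounding box."""
    return [e for e in cat
            if e[4] >= min_mag
            and lat[0] <= e[1] <= lat[1]
            and lon[0] <= e[2] <= lon[1]]

def recurrence_by_magnitude(cat):
    """Event counts per whole-magnitude bin, a crude recurrence summary."""
    return Counter(int(m) for *_, m in cat)

strong = query(catalogue, min_mag=7.5)
bins = recurrence_by_magnitude(catalogue)
```

A GIS front end would run the same kind of filter against the full catalogue and render the matching epicenters on the map.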
Procedia PDF Downloads 316
35 Drivers of Global Great Power Assertiveness: Russia and Its Involvement in the Global South
Authors: Elina Vroblevska, Toms Ratfelders
Abstract:
This paper examines the impact of international status-seeking aspirations on great power behavior within the international system. In particular, we seek to test the assumption, advanced by proponents of Social Identity Theory (SIT), that the inability to achieve social mobility by joining perceived higher-status social groups (of states) leads great powers to adopt a strategy of social competition, in which they aim to equal or outdo the dominant group in the area on which its claim to superior status rests. Since the dissolution of the Soviet Union, Russia has struggled to be accepted as a great power by the group of Western states that created the dominant order of the international system while the Soviet states were isolated. While the 1990s and the beginning of the 21st century can be characterized by striving to integrate into the existing order, the second decade has seen a rather sharp turn towards creating a new power center for Russia through the realization of ideas of multipolarity, rivalry, and the uniqueness of the state itself. Increasingly, we have seen the Kremlin striving to collaborate with and mobilize groups of states that fall outside the categories of democracy, multiculturalism, and international order as perceived by the dominant group, which can be described as the West. Instead, Russia builds its own narrative, in which it creates an alternative understanding of these values, differentiating itself from the higher-status social group. The Global South, from a Russian perspective, is the group of states that can still be swayed to create an alternative power center in the international system - one where Russia can assert its status as a great power. This is so for a number of reasons, the most important being that the Global North is already highly institutionalized in terms of economy (the EU) and defense (NATO), leaving Russia no room but to integrate within the existing framework.
Second, the difference in values and their interpretation: Russia has been adamant, for the last twenty years, about basing its moral code on traditional values such as religion, the heterosexual family model, and moral superiority, which contradict the overall secularism of the Global North. And last, the striking difference in understanding of state governance models: with Russia becoming more autocratic over the course of the last 20 years, it has deliberately created distance between itself and democratic states, entering a “gray area” of alternative understandings of democracy that is more relatable to Global South countries. Using computational text analysis of excerpts of Vladimir Putin’s speeches delivered from 2000-2022 regarding areas that fall outside Russia's immediate area of interest (the Global South), we identify 80 topics that relate to a particular component of great power status - the interest in using force globally. These topics are compared across four temporal frames that capture periods of more and less permissive Western social boundaries. We find a negative association between such permissiveness and Putin’s emphasis on “use of force” topics. This lends further support to Social Identity Theory and contributes to broadening its applicability to explaining questions related to great power assertiveness in areas outside of great powers' primary focus regions.
Keywords: Russia, Global South, great power, identity
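The core measurement step, tracking the salience of "use of force" topics across temporal frames, can be sketched as simple keyword-share scoring. This is a toy stand-in: the vocabulary and the two excerpts below are invented placeholders, and the actual study identified its 80 topics with computational text analysis of a full speech corpus, not a fixed keyword list.

```python
from collections import Counter

# Hypothetical keyword set standing in for a "use of force" topic
FORCE_TERMS = {"force", "military", "operation", "strike"}

def force_topic_share(tokens):
    """Fraction of tokens belonging to the use-of-force vocabulary."""
    counts = Counter(tokens)
    hits = sum(counts[t] for t in FORCE_TERMS)
    return hits / max(len(tokens), 1)

# Toy excerpts for two temporal frames (not actual speech text)
permissive = "trade partnership dialogue cooperation force".split()
restrictive = "military operation force strike security force".split()
```

Comparing the shares across frames is the toy analogue of the reported finding: lower Western permissiveness coincides with a higher "use of force" emphasis.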
Procedia PDF Downloads 54
34 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solution is a more frequently used procedure than seeking analytic solutions, and the finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, velocity and pressure emerge as the two important parameters, and in the solution of the differential equation system they must be solved together. However, in the considered grid system, when pressure and velocity values are solved jointly at the same nodal points, some problems arise. To overcome this problem, the staggered grid system is a preferred solution method. Various algorithms were developed for computerized solutions on the staggered grid system; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for Newtonian flow, with mass and gravitational forces neglected, for an incompressible, laminar fluid, in a hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure, and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method.
SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, as a computer solution algorithm, despite some disadvantages, the SIMPLER algorithm is more practical and gave results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. The required number of grid nodes for the solution of the system was estimated. At the same time, to show that the obtained results are sufficiently accurate, a grid-independence analysis (GCI analysis) was performed for coarse, medium, and fine grids over the solution domain. It was observed that when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
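The Gauss-Seidel iteration used to solve the discretized equation system can be sketched as follows. This is a generic illustration under stated assumptions: the matrix below is a small, diagonally dominant tridiagonal system of the kind central differencing produces, not the study's actual discretization, and the function name is ours.

```python
def gauss_seidel(a, b, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel iteration for A x = b (A diagonally dominant).

    Each sweep updates x[i] in place using the latest available values,
    which is exactly what distinguishes Gauss-Seidel from Jacobi.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / a[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:          # converged when the largest update is tiny
            break
    return x

# Tridiagonal system typical of 1D central differencing (illustrative)
a = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = gauss_seidel(a, b)   # exact solution is [1, 2, 3]
```

In the SIMPLE/SIMPLER context, a sweep like this is applied in turn to the discretized momentum and pressure-correction equations on the staggered grid.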
Procedia PDF Downloads 391
33 Exposing The Invisible
Authors: Kimberley Adamek
Abstract:
According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s, the eastward expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, will have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified in wind tunnels, and a handful of cities such as San Francisco have included wind testing in building code standards, there are still many examples where wind is considered only for basic loading. This typically results in increased structural expense and unwanted mitigation strategies proposed late in a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower, and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (the Walkie-Talkie Tower, London, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and created wind tunnels in streets, among many other issues. Alternatively, some towers have used their form to naturally draw in air and ventilate entire facilities, eliminating the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds to generate electricity (the Bahrain World Trade Center, Manama). Wind and weather exist and affect all parts of the world in ways including science, health, war, infrastructure, catastrophes, tourism, shopping, media, and materials.
Working in partnership with a leading wind engineering company, RWDI, a series of tests, images, and animations documenting discovered interactions of different building forms with wind will be collected to demonstrate the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions, and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested in the wind tunnel and with computational fluid dynamics systems, which will expose, utilize, and manipulate wind flows to create new forms, technologies, and experiences. Ultimately, this thesis aims to question the degree to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, and the general public the ability to broaden their scope and productively utilize this living phenomenon that everyone constantly feels but cannot see.
Keywords: wind engineering, climate, visualization, architectural aerodynamics
Procedia PDF Downloads 358
32 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation, and reattachment for a simple geometry. This type of fluid behaviour occurs in many practical engineering applications, which is why it is investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques, such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry, or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to this type of flow is investigated at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is hard to find in the literature; moreover, most studies evaluating the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. An ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908, and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared with those obtained using other measuring techniques. To compare results with those of other researchers, the step height, expansion ratio, and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients, and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses, and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels and thus decrease the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
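The moving block bootstrap mentioned for the uncertainty analysis can be sketched as follows. This is a hedged illustration: the velocity record is synthetic, the block length and bootstrap count are arbitrary choices, and the function name is ours; the study applied the technique to the actual Vectrino records with its own settings.

```python
import random
import statistics

def moving_block_bootstrap(series, block_len, n_boot, seed=0):
    """95% bootstrap CI for the mean of a correlated series using
    overlapping moving blocks, which preserve short-range correlation."""
    rng = random.Random(seed)
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    means = []
    for _ in range(n_boot):
        resampled = []
        while len(resampled) < n:
            resampled.extend(rng.choice(blocks))   # sample blocks, not points
        means.append(statistics.mean(resampled[:n]))
    means.sort()
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

# Synthetic stream-wise velocity record (m/s); not the Vectrino data
u = [0.31, 0.33, 0.30, 0.35, 0.34, 0.32, 0.36, 0.33, 0.31, 0.34]
ci = moving_block_bootstrap(u, block_len=3, n_boot=500)
```

Resampling blocks rather than individual samples is what makes the interval honest for turbulence records, where successive velocity samples are correlated.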
Procedia PDF Downloads 148
31 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example
Authors: Alena Nesterenko, Svetlana Petrikova
Abstract:
Research evaluation is one of the most important elements of self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated, non-formalized problems, is, firstly, a scientifically sound way to conduct assessment that ensures maximum effectiveness of work at every step, and, secondly, the usage of quantitative methods for evaluation, assessment of expert opinion, and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues that arise with these methods are expert selection, management of the assessment procedure, processing of the results, and remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - to realize, within one platform, the independent activities of different workgroups (e.g., expert officers, managers); - to establish different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; - to form the required output documents for each workgroup; - to configure information gathering for each workgroup (forms of assessment, tests, inventories); - to create and operate personal databases of remote users; - to set up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts.
The inventory was designed so that experts may submit not only their personal data, place of work, and scientific degree but also keywords reflecting their expertise, academic interests, ORCID, ResearcherID, RSCI SPIN-code, Scopus AuthorID, knowledge of languages, and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party's demands in the form of appraised inventories, commentaries (50-250 characters), and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to the keywords; this method proved to work well, unlike the OECD classifier. At the last stage, the choice of experts is approved by the supervisor, and e-mails are sent to the experts inviting them to assess the project. An expert supervisor monitors the experts writing their reports so that all formalities are in place (time frame, propriety, correspondence). If the difference in assessments exceeds four points, a third evaluation is appointed. When an expert finishes work on his expert opinion, the system shows a contract marked ‘new’, the managers process the contract, and the expert receives an e-mail that the contract is formed and ready to be signed. Once all formalities are concluded, the expert receives remuneration for his work. The specifics of the interaction of the examination officer with the experts will be presented in the report.
Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation
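The keyword-based expert selection step described above can be sketched as a simple overlap ranking. This is an illustrative sketch only: the expert names, keyword sets, and function name are hypothetical, and the production system keeps these records in its user database rather than in-memory dictionaries.

```python
def rank_experts(application_keywords, experts, top_n=2):
    """Rank experts by keyword overlap with an application.

    Mirrors the workflow where the expert officer picks, generally,
    two matching experts per application for supervisor approval.
    """
    app = set(application_keywords)
    scored = sorted(
        experts.items(),
        key=lambda kv: len(app & kv[1]),   # overlap size as the score
        reverse=True,
    )
    # keep only candidates with at least one shared keyword
    return [name for name, kws in scored[:top_n] if app & kws]

experts = {
    "A": {"econometrics", "labor markets"},
    "B": {"machine learning", "econometrics"},
    "C": {"sociology", "education"},
}
chosen = rank_experts({"econometrics", "machine learning"}, experts)
```

In the real system this shortlist would then go to the supervisor for approval before the invitation e-mails are sent.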
Procedia PDF Downloads 207
30 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions
Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra
Abstract:
In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosol, get deposited on the inner surface of the piping system, mainly due to gravitational settling and thermophoretic deposition. The removal processes in the complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure, and flow rates. These parameters generally vary with time and must therefore be carefully monitored to predict aerosol behavior in the piping system. The removal process depends on the size of the particles, which determines how many particles deposit or travel across the bends and reach the other end of the piping system. The released aerosol is deposited onto the inner surface of the piping system by various mechanisms, such as gravitational settling, Brownian diffusion, and thermophoretic deposition, among others. To quantify deposition correctly, identification and understanding of the aforementioned deposition mechanisms are of great importance. These mechanisms are significantly affected by different flow and thermodynamic conditions; thermophoresis, in particular, plays a significant role in particle deposition. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in dry environments to study the spatial distribution of particle mass and number concentrations and their depletion due to various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system.
In addition to the built-in models for turbulence, heat transfer, and flow in the commercial CFD code (FLUENT), a population balance sub-model (PBM) is used to describe the coagulation process and to compute the number concentration, along with the size distribution, at different sections of the piping. In the sub-model, coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD-modelled results. It is found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, and low deposition is obtained in the piping sections. The MMAD decreases along the length of the test assembly, which shows that large particles get deposited or removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of a bend is also observed; the relative loss in mass concentration at bends is larger at the higher flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes than for the intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that they are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
Keywords: aerosol, CFD, deposition, coagulation
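The population-balance treatment of coagulation can be illustrated with one explicit Euler step of the discrete Smoluchowski equation. This is a minimal sketch under stated assumptions: a constant kernel, a tiny four-class grid, and illustrative concentrations; it is not the FLUENT UDF or the kernels actually used in the study.

```python
def coagulation_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation
    for size classes 1..len(n) (class k = k monomers):
    dn_k/dt = 0.5 * sum_{i+j=k} K(i,j) n_i n_j - n_k * sum_j K(k,j) n_j
    """
    m = len(n)
    dn = []
    for k in range(1, m + 1):
        # gain: collisions of smaller classes that form class k
        gain = 0.5 * sum(K(i, k - i) * n[i - 1] * n[k - i - 1]
                         for i in range(1, k))
        # loss: class k colliding with any class
        loss = n[k - 1] * sum(K(k, j) * n[j - 1] for j in range(1, m + 1))
        dn.append(gain - loss)
    return [n[i] + dt * dn[i] for i in range(m)]

constant_kernel = lambda i, j: 1.0e-9   # hypothetical constant kernel (cm^3/s)
n0 = [1.0e6, 0.0, 0.0, 0.0]             # monomers only (cm^-3)
n1 = coagulation_step(n0, constant_kernel, dt=1.0)
```

Even this toy step shows the qualitative behavior reported above: the monomer count falls while dimers appear, shifting the distribution toward larger sizes as the aerosol ages along the piping.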
Procedia PDF Downloads 144
29 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English
Authors: Noury Bakrim
Abstract:
When dealing with motivation/arbitrariness, the forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment and schematization within the specificity of language as a sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs. double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), the non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon, problematizing the role of the brain and the sensorimotor contribution in the continuous forming form. Therefore, we hypothesize biotization as both process and trace co-constructing motivation/forming form. Henceforth, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation, and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble) and [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splaeʃ/ and /splaet/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces; splat, impact on the ground.
The pair has stridency and distribution as distinctive features, which specify its phonetic realization (and part of its meaning): /ʃ/ is [+strident] and /t/ is [+distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily and vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/, and other clusters, or the tense uvular /qq/ in initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization, relating to sequence modeling under specific sensorimotor/verbal categories (production/perception); ii) the process of phonotactic selection - prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.
Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization
Procedia PDF Downloads 143
28 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies
Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König
Abstract:
Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebooks) offer rich possibilities to improve, validate, and speed up research. However, to date there is no cross-platform integration of these subsystems, and the implementation of online studies still suffers from its complexity (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without needing to program a single line of code. In particular, our framework offers the possibility to manipulate and combine experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants’ responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, built-in Google Translate functionality ensures automatic translation of the experimental content, so that thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported in the desired format (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can be analyzed online within our framework using the integrated IPython notebook. 
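An action-event system of the kind described above can be sketched in a few lines of plain JavaScript. The sketch below is illustrative only; the names (`EventSystem`, `on`, `trigger`) are our assumptions for exposition and not the framework's actual API:

```javascript
// Minimal sketch of an action-event system for a browser-based study:
// user events (e.g. key presses) trigger registered actions, such as
// recording a participant response together with its reaction time.
class EventSystem {
  constructor() {
    this.handlers = {};  // event name -> list of actions
    this.responses = []; // recorded participant responses
  }
  on(event, action) {
    (this.handlers[event] = this.handlers[event] || []).push(action);
  }
  trigger(event, payload) {
    (this.handlers[event] || []).forEach((action) => action(payload));
  }
}

const study = new EventSystem();
const trialStart = 100; // in the browser this would come from performance.now()

// Action: store the pressed key and its reaction time relative to trial onset.
study.on("keypress", ({ key, time }) => {
  study.responses.push({ key, rt: time - trialStart });
});

// Simulated participant interaction at t = 532 ms.
study.trigger("keypress", { key: "f", time: 532 });
// study.responses now holds [{ key: "f", rt: 432 }]
```

In the real framework, such actions would be attached through the graphical editor rather than written by hand; the dispatcher pattern itself is what allows stimulus changes and response recording to be wired up without custom code.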
The framework was designed so that studies can be exchanged between researchers. This supports not only the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to greater consistency across experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using it. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and could not have been shown without the massive amount of data collected via our framework. These findings shed new light on cultural differences in spatial navigation. We conclude that our new framework constitutes a methodological innovation with a wide range of advantages for online research, by which new insights can be revealed on the basis of massive data collection.
Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition
Procedia PDF Downloads 257
27 The Shadowy History of Berlin Underground: 1939-45/Der Schattenmann: Tagebuchaufzeichnungen 1938-1945
Authors: Christine Wiesenthal
Abstract:
This paper asks how to read a particularly vexed and complicated life writing text. For over half a century, the wartime journals of Ruth Andreas-Friedrich (1901-1977) circulated among a handful of more or less authoritative and “authentic” first-hand accounts of German resistance under Hitler. A professional journalist, Andreas-Friedrich is remembered today largely through her publications at the war’s end, which appeared in English as Berlin Underground (published by Henry Holt in 1947), just before their publication in Germany as Der Schattenmann (“The Shadow Man,” also in 1947). A British edition by the now obscure Latimer House Limited (London) followed in 1948; it is based closely on, but is not identical to, the Henry Holt American edition, which in turn differs significantly from its German counterpart. Both Berlin Underground and Der Schattenmann figure Andreas-Friedrich as a key figure in an anti-fascist cell that operated in Berlin under the code name “Uncle Emil,” and provide a riveting account of political terror, opportunism, and dissent under the Nazi regime. Recent scholars have, however, begun to raise fascinating and controversial questions about Andreas-Friedrich’s own writing/reconstruction process in compiling the journals and about her highly selective curatorial role and claims. The apparent absence of any surviving original manuscript for Andreas-Friedrich’s journals amplifies the questions around them. Crucially, so too does the role of the translator of the English editions of Berlin Underground, the enigmatic June Barrows Mussey, a subject that has thus far gone virtually unnoticed and which this paper will focus on. Mussey, a prolific American translator, simultaneously cultivated a career as a professional magician, publishing a number of books on that subject under the alias Henry Hay. 
While the record indicates that Mussey attempted to compartmentalize his professional life, research into the publishing and translation history of Berlin Underground suggests that the two roles converge in the translator’s invisibility: by effacing the traces of his own hand and leaving his significant textual interventions unmarked, Mussey, in effect, edited, abridged, and altered Andreas-Friedrich’s journals a second time. Indeed, it could be said that while the fictitious “Uncle Emil” is positioned as “the shadow man” of the German edition, Mussey himself emerges as precisely that in the English rendering of the journals. The implications of Mussey’s translation of Andreas-Friedrich’s journals are among the most important unexamined gaps in the shadowy publishing history of Berlin Underground, a history full of “tricks” (Mussey’s word) and illusions of transparency. Based largely on archival research of unpublished materials and on methods of close reading and comparative analysis, this study seeks to convey preliminary insights and exploratory work, and to frame questions toward what is ultimately envisioned as an experimental project in poetic historiography. As this work is still in its early stages, it would be especially welcome to have the opportunity provided by this conference to connect with a community of life writing colleagues who might help think through some of the challenges and possibilities that lie ahead.
Keywords: women’s wartime diaries, translation studies, auto/biographical theory, politics of life writing
Procedia PDF Downloads 54
26 Dynamic High-Rise Moment Resisting Frame Dissipation Performances Adopting Glazed Curtain Walls with Superelastic Shape Memory Alloy Joints
Authors: Lorenzo Casagrande, Antonio Bonati, Ferdinando Auricchio, Antonio Occhiuzzi
Abstract:
This paper summarizes the results of a survey on the dynamic dissipation of smart non-structural elements installed in modern high-rise mega-frame prototypes. An innovative glazed curtain wall was designed using Shape Memory Alloy (SMA) joints in order to increase energy dissipation and enhance the seismic/wind response of the structures. The studied buildings consisted of thirty- and sixty-storey planar frames, extracted from reference three-dimensional steel Moment Resisting Frames (MRFs) with outriggers and belt trusses. The internal core was composed of a concentrically braced frame (CBF) system, whilst outriggers were placed every fifteen stories to limit second-order effects and inter-storey drifts. These structural systems were designed in accordance with European rules, and numerical FE models able to account for geometric and material nonlinearities were developed with an open-source code. With regard to the characterization of non-structural building components, full-scale crescendo tests were performed on aluminium/glass curtain wall units at the laboratory of the Construction Technologies Institute (ITC) of the Italian National Research Council (CNR), deriving force-displacement curves. Three-dimensional brick-based inelastic FE models were calibrated against the experimental results to simulate the façade response. Since recent seismic events and extreme dynamic wind loads have caused widespread failure of non-structural components, which entails significant economic losses and represents a hazard to pedestrian safety, a more dissipative glazed curtain wall was studied. Taking advantage of the mechanical properties of SMA, advanced smart joints were designed with the aim of enhancing both the dynamic performance of the single non-structural unit and the global behavior. 
Thus, three-dimensional brick-based plastic FE models of the innovative non-structural system were produced, simulating the evolution of mechanical degradation in aluminium-to-glass and SMA-to-glass connections under large deformations. Equivalent nonlinear links were then calibrated to reproduce the behavior of both the tested and the smart-designed units, and implemented in the thirty- and sixty-storey planar frame FE models. Nonlinear time history analyses (NLTHAs) were performed to quantify the potential of the new system when considered in the lateral resisting frame system (LRFS) of modern high-rise MRFs. Sensitivity to structure height was explored by comparing the responses of the two prototypes. Trends in global and local performance are discussed to show that, if accurately designed, advanced materials in non-structural elements provide new sources of energy dissipation.
Keywords: advanced technologies, glazed curtain walls, non-structural elements, seismic-action reduction, shape memory alloy
Procedia PDF Downloads 329