Search results for: integrated value model for sustainability assessment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24919

259 Border Security: Implementing the “Memory Effect” Theory in Irregular Migration

Authors: Iliuta Cumpanasu, Veronica Oana Cumpanasu

Abstract:

This paper studies the conjunction between the newly emerged theory of the “Memory Effect” in irregular migration and related criminality and the notion of securitization, and its impact on border management. It advances the field by identifying, for the first time, the patterns linking the two concepts and by developing a theoretical explanation of the effects of non-military threats on border security. Over recent years, irregular migration has increased significantly worldwide. The U.N.'s refugee agency reports that the number of displaced people is at its highest ever, surpassing even post-World War II numbers, when the world was struggling to come to terms with the most devastating event in its history. This is also the current reality along the core studied coordinate, the Balkan Route of irregular migration, which starts in Asia and Africa, continues through Turkey, Greece, North Macedonia or Bulgaria, and Serbia, and ends in Romania, where thousands of migrants find themselves in an irregular situation concerning their entry into the European Union, with important consequences for related criminality. Data from the past six years were collected through semi-structured interviews with experts in the field of migration and desk research within organisations involved in border security, pursuing genuine insights from the field; the data were constantly checked against the existing literature and subsequently subjected to mixed methods of analysis, including a Vector Auto-Regression estimates model.
Thereafter, the analysis of the data followed the processes and outcomes of Grounded Theory, and a new Substantive Theory emerged, explaining how the phenomena of irregular migration and cross-border criminality are the decisive impetus for implementing the concept of securitization in border management using the proposed pattern. The findings therefore capture an area that has not yet benefitted from a comprehensive approach in the scientific community, such as the seasonality, stationarity, dynamics, predictions, and pull and push factors in irregular migration, also highlighting how the recent pandemic interfered with border security. The research uses an inductive, revelatory theoretical approach that aims to offer a new theory to explain a phenomenon, yielding a practical contribution for the scientific community, research institutes and academia, as well as for organizational practitioners in the field, among which the UN, IOM, UNHCR, Frontex, Interpol, Europol, and national agencies specialized in border security. The scientific outcomes of this study were validated on June 30, 2021, when the author defended his dissertation for the European Joint Master’s in Strategic Border Management, a prestigious two-year program supported by the European Commission, the Frontex Agency and a consortium of six European universities; the topic is currently one of the research objectives of his pending PhD research at the West University of Timisoara.
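The Vector Auto-Regression step mentioned above can be sketched in miniature. The two monthly series below are synthetic placeholders, not the study's data, and the minimal VAR(1) fit via least squares stands in for a full econometric package:

```python
import numpy as np

# Minimal VAR(1) sketch on synthetic data (illustrative only): each
# variable is regressed on an intercept plus the lagged values of all
# variables. "crossings" and "crime" are hypothetical monthly series.
rng = np.random.default_rng(0)
n = 72  # six years of monthly observations
crossings = 100 + np.cumsum(rng.normal(0, 5, n))
crime = 20 + 0.1 * crossings + rng.normal(0, 2, n)
Y = np.column_stack([crossings, crime])

X = np.hstack([np.ones((n - 1, 1)), Y[:-1]])   # intercept + one lag
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)  # coefficient matrix, shape (3, 2)

# One-step-ahead forecast from the last observation
y_next = np.hstack([1.0, Y[-1]]) @ B
print(B.shape, y_next.shape)
```

In practice a library such as statsmodels would also select the lag order and report estimates; the point here is only the structure of the regression.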

Keywords: migration, border, security, memory effect

Procedia PDF Downloads 92
258 Analysis of Flow Dynamics of Heated and Cooled Pylon Upstream to the Cavity past Supersonic Flow with Wall Heating and Cooling

Authors: Vishnu Asokan, Zaid M. Paloba

Abstract:

Flow over cavities is an important area of research due to the significant change in flow physics caused by cavity aspect ratio, free-stream Mach number and the nature of the upstream boundary layer approaching the cavity leading edge. Cavity flow finds application in aircraft wheel wells, weapons bays, combustion chambers of scramjet engines, etc. These flows are highly unsteady, compressible and turbulent, and they involve mass entrainment coupled with acoustic phenomena. The variation of flow dynamics in an angled cavity with a heated and cooled pylon upstream of the cavity, under spatial combinations of heat flux addition to and removal from the wall, is studied numerically. The goal of the study is to investigate the effect of energy addition to and removal from the cavity walls and pylon on cavity flow dynamics. Preliminary steady-state numerical simulations on inclined cavities with heat addition have shown that wall pressure profiles, as well as the recirculation, are influenced by heat transfer to the compressible fluid medium. Such hybrid control of cavity flow dynamics, in the form of heat transfer and pylon geometry, can open up greater opportunities for enhancing the mixing and flame-holding requirements of supersonic combustors. Adding a pylon upstream of the cavity reduces the acoustic oscillations emanating from the geometry. An unsteady numerical analysis of supersonic flow past cavities exposed to wall heating and cooling, with a heated and cooled pylon, helps clarify the oscillation suppression in the cavity. A cavity of L/D = 4 and aft-wall angle 22°, with an upstream pylon of height 1.5 mm and wall angle 29°, exposed to a supersonic flow of Mach number 2 and heat fluxes of 40 W/cm² and -40 W/cm², is modeled for the above study. In the preliminary study, the domain is modeled and validated numerically with the SST k-ω turbulence model using an implicit HLLC scheme. Both qualitative and quantitative flow data are extracted and analyzed using advanced CFD tools.
Flow visualization is done using the numerical Schlieren method, since it captures density variations in the fluid medium. Heat flux addition to the wall increases the size of the secondary vortex in the cavity, while energy removal reduces the vortex size. The flow-field turbulence appears to increase at higher heat flux, and the shear layer thickness increases as heat flux increases. The steady-state analysis shows that wall pressure varies as heat flux increases. A shift in frequency in the unsteady wall-pressure analysis is an interesting observation of this study. The time-averaged skin friction appears to decrease at higher heat flux due to the variation in viscosity of the fluid inside the cavity.
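The numerical Schlieren visualization described above amounts to plotting the magnitude of the density gradient. A minimal sketch on a synthetic Gaussian "density" field (not the study's CFD output) might look like:

```python
import numpy as np

# Numerical Schlieren sketch: compute |grad(rho)| of a density field by
# finite differences and normalize it for display. The density field
# here is a synthetic Gaussian bump, purely for illustration.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 0.5, 100)
X, Y = np.meshgrid(x, y)
rho = 1.0 + 0.4 * np.exp(-((X - 0.5) ** 2 + (Y - 0.25) ** 2) / 0.005)

drho_dy, drho_dx = np.gradient(rho, y, x)   # derivatives along axis 0 (y), axis 1 (x)
schlieren = np.sqrt(drho_dx ** 2 + drho_dy ** 2)
schlieren /= schlieren.max()                # normalize to [0, 1] for plotting
print(schlieren.shape)
```

The resulting array would typically be rendered as a grayscale image (e.g., with matplotlib's `imshow`), mimicking an experimental Schlieren photograph.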

Keywords: energy addition, frequency shift, Numerical Schlieren, shear layer, vortex evolution

Procedia PDF Downloads 143
257 Transformative Economic Policies in India: A Political Economy Analysis of IMF Influence, Sectoral Shifts, and Political Transitions

Authors: Vrajesh Rawal

Abstract:

India's economic landscape has witnessed significant transformations over the past decades, characterized by shifts from agrarian to service-oriented economies. Recently, there has been a growing emphasis on transitioning towards a manufacturing-led growth model driven by factors such as demographic changes, technological advancements, and evolving global trade dynamics. These changes reflect broader efforts to enhance industrialization, boost employment opportunities, and diversify the economic base beyond traditional sectors. Within this context, this research focuses on understanding the specific drivers and dynamics behind India's shift from a predominantly service-based economy to one centered on manufacturing. It seeks to explore how political ideologies influence economic policies and shape sectoral priorities, with a particular focus on contrasting approaches between the Indian National Congress (INC) and the Bharatiya Janata Party (BJP). Additionally, the study evaluates the alignment of IMF policy recommendations with India's economic goals and priorities within the theoretical frameworks of neoliberalism and political economy theory. Despite the extensive literature on India's economic reforms and political economy, there remains a gap in understanding how political ideology influences sectoral shifts and economic policy outcomes, particularly in the context of IMF recommendations. Existing studies often focus narrowly on either political ideologies or economic reforms without fully integrating both perspectives. This research aims to bridge this gap by providing a comprehensive analysis that integrates political economy theories with empirical evidence from political speeches, government documents, and IMF reports. 
Through qualitative content analysis of speeches by political leaders, document analysis of key governmental documents, and scrutiny of party manifestos, this research demonstrates how political ideologies translate into distinct economic strategies and developmental agendas. It highlights the extent to which IMF policy prescriptions align with India's economic objectives and how these interactions shape broader socio-economic outcomes. The theoretical framework of neoliberalism and political economy theory provides a lens to interpret these findings, offering insights into the complex interplay between economic policies, political ideologies, and institutional frameworks in India. The findings of this study are expected to provide valuable insights for policymakers, researchers, and practitioners involved in economic governance and development planning in India. By understanding the factors driving sectoral shifts and the influence of political ideologies on economic policies, policymakers can make informed decisions to foster sustainable economic growth and development. Implementation of these insights could contribute to refining policy frameworks, enhancing alignment with national development priorities, and optimizing engagement with international financial institutions like the IMF to better meet India's socio-economic challenges and opportunities in the evolving global context.

Keywords: political economy, international politics, social science, policy analysis

Procedia PDF Downloads 32
256 Defense Priming from Egg to Larvae in Litopenaeus vannamei with Non-Pathogenic and Pathogenic Bacteria Strains

Authors: Angelica Alvarez-Lee, Sergio Martinez-Diaz, Jose Luis Garcia-Corona, Humberto Lanz-Mendoza

Abstract:

World aquaculture is always looking for improvements to achieve production with high yields while avoiding infection by pathogenic agents. The best way to achieve this is to understand the biological model in order to create alternative treatments that could be applied in hatcheries, resulting in greater economic gains and improvements in public health. In the last decade, immunomodulation in shrimp culture with probiotics, organic acids and different carbon sources has gained great interest, mainly in larval and juvenile stages. Immune priming is associated with a strong protective effect against a later pathogen challenge. This work provides another perspective on immunostimulation from spawning until hatching: the stimulation happens during embryonic development and generates resistance to infection by pathogenic bacteria. Massive spawnings of white shrimp L. vannamei were obtained and placed in experimental units with 700 mL of sterile seawater at 30 °C, a salinity of 28 ppm and continuous aeration, at a density of 8 embryos.mL⁻¹. The immunostimulating effect of three dead strains of non-pathogenic bacteria (Escherichia coli, Staphylococcus aureus and Bacillus subtilis) and one strain pathogenic to white shrimp (Vibrio parahaemolyticus) was evaluated. The heat-killed strains were adjusted to an optical density (OD) of 0.5 at 600 nm and added directly to the seawater of each unit at a ratio of 1/100 (v/v). A control group of embryos without an inoculum of dead bacteria was kept under the same physicochemical conditions as the rest of the treatments throughout the experiment and used as a reference. The duration of the stimulus was 12 hours; then the hatched larvae were collected, counted and transferred to a new experimental unit (same physicochemical conditions, salinity of 28 ppm) for an infection challenge against the pathogen V. parahaemolyticus, adding directly to the seawater 1/100 (v/v) of the live strain adjusted to an OD of 0.5 at 600 nm.
Subsequently, 24 hrs after infection, nauplii survival was evaluated. The results of this work show that, after 24 hrs, the hatching rates of shrimp embryos immunostimulated with the dead strains of B. subtilis and V. parahaemolyticus are significantly higher than those of the rest of the treatments and the control. Furthermore, survival of L. vannamei after a 24 hr infection challenge against the live strain of V. parahaemolyticus is greater (P < 0.05) in larvae immunostimulated during embryonic development with the dead strains of B. subtilis and V. parahaemolyticus, followed by those treated with E. coli. In summary, surface antigens can stimulate developing cells to promote hatching while allowing normal development, in agreement with the optical observations; in addition, there is a differential response effect between treatments post-infection. This research provides evidence of the immunostimulant effect of dead pathogenic and non-pathogenic bacterial strains on the hatching rate and survival of the shrimp L. vannamei during embryonic and larval development. The research continues by evaluating the effect of these dead strains on the expression of genes related to defense priming in L. vannamei larvae from massive spawnings in hatcheries, before and after the infection challenge against V. parahaemolyticus.

Keywords: immunostimulation, L. vannamei, hatching, survival

Procedia PDF Downloads 142
255 Monitoring and Evaluation of Master Science Trainee Educational Students to their Practicum in Teaching Physics for Improving and Creating Attitude Skills for Sustainable Developing Upper Secondary Students in Thailand

Authors: T. Santiboon, S. Tongbu, P. S. Saihong

Abstract:

This study investigates students' perceptions of their physics classroom learning environments, their individualization, and their interactions with the instructional practicum in teaching physics of master science trainee educational students, for improving and creating sustainable attitude skills toward physics among upper secondary students in Thailand. Associations between these perceptions and students' attitudes toward physics were also determined. Learning environment perceptions were obtained using the 35-item Physics Laboratory Environment Inventory (PLEI), modified from the original Science Laboratory Environment Inventory. The 25-item Individualized Classroom Environment Questionnaire (ICEQ) assessed those dimensions which distinguish individualized physics classrooms from conventional ones, based on individualized, open and inquiry-based education. Teacher-student interactions were assessed with the 48-item Questionnaire on Teacher Interaction (QTI). Both of these questionnaires have an Actual Form (assessing the class as it actually is) and a Preferred Form (asking students what they would prefer their class to be like, i.e., the ideal situation). Students' attitudes toward physics were assessed with the Test Of Physics-Related Attitude (TOPRA), modified from the original Test Of Science-Related Attitude (TOSRA). The questionnaires were administered in three phases, using a cluster random sampling technique, to a sample of 989 students in 28 physics classes from 10 schools at the grade 10, 11, and 12 levels in Secondary Educational Service Area 26 (Maha Sarakham Province) and Area 27 (Roi-Et).
Statistically significant differences were found between the students' perceptions of the actual-1, actual-2 and preferred environments of their physics laboratories, individualized classrooms, and teacher interpersonal behaviors, and associations with their improving and creating sustainable attitude skills in their physics classes were also found. In the predictions for the monitoring and evaluation of the master science trainee educational students' practicum in teaching physics, the set of actual and preferred environment perceptions as a whole also correlated with students' physics-related attitudes. The R² values indicate that 58%, 67%, and 84% of the variances in students' attitudes for the actual-1, actual-2 and preferred forms of the PLEI; 42%, 63%, and 72% for the ICEQ; and 38%, 59%, and 68% for the QTI were attributable to their perceptions of their actual and preferred physics environments and their developing sustainable creative science skills toward physics, respectively. Based on these findings, suggestions are provided for improving the physics laboratory and individualized classes and teacher interpersonal behaviors, in light of students' perceptions, through the master science trainee educational students' instructional administration.
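The R² values reported above are proportions of attitude variance explained by environment-perception scores. A small synthetic sketch of how such a value is computed (the data and scale names are placeholders, not the study's):

```python
import numpy as np

# Illustration of the R² statistic: fraction of variance in an outcome
# (attitude score) explained by predictors (environment perception
# scales) under ordinary least squares. All data here are synthetic.
rng = np.random.default_rng(1)
n = 200
perception = rng.normal(0, 1, (n, 3))  # e.g., three hypothetical PLEI scales
attitude = perception @ np.array([0.6, 0.3, 0.2]) + rng.normal(0, 0.5, n)

X = np.hstack([np.ones((n, 1)), perception])          # add intercept column
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)   # OLS fit
resid = attitude - X @ beta
r2 = 1 - resid.var() / attitude.var()                 # explained-variance fraction
print(round(r2, 3))
```

An R² of, say, 0.58 would then read as "58% of the variance in attitudes is attributable to the perception scales", matching the phrasing in the abstract.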

Keywords: promotion, instructional model, qualitative method, reflective thinking, trainee teacher student

Procedia PDF Downloads 268
254 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction

Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz

Abstract:

Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, floor framing, stair framing, and the making of landings. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of their most common occupational exposures. Wood dust is a complex substance composed of cellulose, polyoses and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on the wood. For example, shattering of wood during sanding produces finer particles than does chipping in sawing and milling. To our knowledge, how the shattering, cutting and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles; consequently, little attention has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important, because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor and conventional particle counters. The study was conducted in a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings.
Exposure levels of nanoparticles (n = 10) were measured with a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0 and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. The mass concentration of PM2.5 and PM10 (µg/m³) was measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded, and wind velocity was measured by a hot-wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:

Keywords: wood dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 189
253 Language and Power Relations in Selected Political Crisis Speeches in Nigeria: A Critical Discourse Analysis

Authors: Isaiah Ifeanyichukwu Agbo

Abstract:

Human speech is capable of serving many purposes. Power and control are not always exercised overtly by linguistic acts, but may be enacted and exercised in the myriad taken-for-granted actions of everyday life. Domination, power control, discrimination and mind control exist in human speech and may lead to asymmetrical power relations. In discourse, there are persuasive and manipulative linguistic acts that serve to establish solidarity and identification with the 'we group' and polarization against the 'they group'. Political discourse is crafted to defend and promote problematic narratives of outright controversial events in a nation’s history, thereby sustaining domination, marginalization, manipulation, inequalities and injustices, often without the dominated and marginalized groups being aware of them. Such speeches are designed and positioned to serve the political and social needs of their producers. Political crisis speeches in Nigeria, just like those in other countries, concentrate on positive self-image, de-legitimization of political opponents, reframing accusations to one’s advantage, redefining problematic terms and adopting reversal strategies. In most cases, the people are ignorant of the hidden ideological positions encoded in the text. Little research has been conducted within the frameworks of critical discourse analysis and systemic functional linguistics to investigate this situation in political crisis speeches in Nigeria. In this paper, we focus on analyzing the linguistic, semantic, and ideological elements in selected political crisis speeches in Nigeria to investigate whether they create and sustain unequal power relations and manipulative tendencies, from the perspectives of Critical Discourse Analysis (CDA) and Systemic Functional Linguistics (SFL). Critical Discourse Analysis unpacks both opaque and transparent structural relationships of dominance, power relations and control as manifested in language.
Critical discourse analysis emerged from a critical theory of language study which sees the use of language as a form of social practice in which social relations are reproduced or contested and different interests are served. Systemic functional linguistics relates the structure of texts to their function. Fairclough’s model of CDA and Halliday’s systemic functional approach to language study are adopted in this paper, which probes into language use that perpetuates inequalities. This study demystifies the hidden implicature of the selected political crisis speeches and reveals the existence of information that is not made explicit in what the political actors actually say. The analysis further reveals the ideological configurations present in the texts; these ideological standpoints are the basis for naturalizing implicit ideologies and hegemonic influence in the texts. The analyses also uncovered the linguistic and discursive strategies deployed by the text producers to manipulate unsuspecting members of the public, both mentally and conceptually, in order to enact, sustain and maintain unhealthy power relations at times of crisis in Nigerian political history.

Keywords: critical discourse analysis, language, political crisis, power relations, systemic functional linguistics

Procedia PDF Downloads 342
252 Using Business Interactive Games to Improve Management Skills

Authors: Nuno Biga

Abstract:

Continuous process improvement is a permanent challenge for managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to explore synergies between processes and eliminate waste of time and other resources. Leadership in organizations determines the efficiency of teams through its influence on collaborators, their motivation, and the consolidation of a feeling of (group) ownership. "Organizational health" depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). It is therefore important that managers can correct in advance any deviation from the expected exercise of leadership. Top management teams must act as regulatory agents of leadership within the organization, ensuring the monitoring of actions and the alignment of managers in accordance with humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of “Business Interactive Games” (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years, in both educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs have contributed to improving the organizational environment by focusing on the evaluation of players’ (agents’) skills, empowering their capabilities, and identifying the critical factors that create value in each context.
The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of each organization whose operation is being simulated. The players in BI GAMES can be managers or employees in different roles in the organization, or students in a learning context. They interact with each other and are asked to make decisions in the presence of several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate the resources used and needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious-games approach, organizational managers can confront the scientific approach to strategic decision-making with their real-life, experience-based approach. Considering that each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including its discussion amongst participants.

Keywords: business interactive games, gamification, management empowerment skills, simulation living labs

Procedia PDF Downloads 112
251 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System

Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge

Abstract:

The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on the quality of water. The technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters using the iSolo alpha/beta counting system is an adequate nuclear technique for assessing radioactivity levels in natural and waste water samples, due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each 2 L water sample was evaporated (at low heat) to a small volume, transferred evenly (for homogenization) into a 50 mm stainless steel counting planchet, heated by an IR lamp, and a constant-weight residue was obtained. The samples were then counted for gross alpha and beta. Sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste sludges and waste water are generated every year by various industries, and this water can be reused for different applications. Therefore, the implementation of water treatment plants and the measurement of water quality parameters in industrial waste water are very important before discharge into the environment. This waste may contain different types of pollutants, including radioactive substances.
All the measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for the discharge of industrial waste into inland surface water, which are 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta, respectively (National Environmental Act, No. 47 of 1980, as set out in the Extraordinary Gazette of the Democratic Socialist Republic of Sri Lanka of February 2008). The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity, proposed by the World Health Organization in 2011; the water is therefore acceptable for human consumption without any further clarification with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) of 0.1 mSv y⁻¹ would usually not be exceeded. The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water, and the recommended level of 0.1 mSv/y represents a very low level of health risk. This monitoring work will be continued for environmental protection purposes.
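For comparison, the gazette discharge limits quoted in µCi/mL can be converted to the Bq/L units used for the drinking-water guidance. A short sketch of the arithmetic (conversion factors only, no measured data):

```python
# Convert the discharge limits quoted above from µCi/mL to Bq/L.
# 1 Ci = 3.7e10 Bq, so 1 µCi = 3.7e4 Bq; 1 mL = 1e-3 L,
# hence 1 µCi/mL = 3.7e7 Bq/L.
UCI_PER_ML_TO_BQ_PER_L = 3.7e4 / 1e-3

alpha_limit = 1e-9 * UCI_PER_ML_TO_BQ_PER_L  # gross alpha limit, in Bq/L
beta_limit = 1e-8 * UCI_PER_ML_TO_BQ_PER_L   # gross beta limit, in Bq/L
print(alpha_limit, beta_limit)  # approx. 0.037 and 0.37 Bq/L
```

This makes the comparison in the abstract concrete: the discharge limits (roughly 0.037 and 0.37 Bq/L) are of the same order as the WHO drinking-water screening levels of 0.5 and 1 Bq/L.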

Keywords: drinking water, gross alpha, gross beta, waste water

Procedia PDF Downloads 198
250 A Qualitative Study of Experienced Early Childhood Teachers Resolving Workplace Challenges with Character Strengths

Authors: Michael J. Haslip

Abstract:

Character strength application improves performance and well-being in adults across industries, but the potential impact of character strength training among early childhood educators is mostly unknown. To explore how character strengths are applied by early childhood educators at work, a qualitative study was completed alongside professional development provided to a group of in-service teachers of children ages 0-5 in Philadelphia, Pennsylvania, United States. Study participants (n=17) were all female. The majority of participants were non-white, in full-time lead or assistant teacher roles, had at least ten years of experience, and held a bachelor’s degree. Teachers attended 2-hour weekly professional development sessions over a 10-week period on the topic of social and emotional learning and child guidance. This training included modules and sessions on identifying a teacher’s character strength profile using the scientifically based Values in Action classification of 24 strengths (e.g., humility, perseverance). Teachers were then asked to apply their character strengths to help resolve current workplace challenges. This study identifies which character strengths the teachers reported using most frequently and the nature of the workplace challenges being resolved in this context. The study also reports how difficult these challenges were for the teachers and their success rate at resolving workplace challenges using a character strength application plan. It also documents how teachers’ own use of character strengths relates to their modeling of these same traits (e.g., kindness, teamwork) for children, especially when the workplace challenge directly involves the children, such as when addressing issues of classroom management and behavior.
Data were collected from action plans (reflective templates) in which teachers explained the workplace challenge they were facing, the character strengths they used to address it, their plan for applying those strengths to the challenge, and the subsequent results. Content analysis and thematic analysis were used to investigate the research questions, using approaches that included classifying, connecting, describing, and interpreting the data reported by educators. Findings reveal that teachers most frequently used kindness, leadership, fairness, hope, and love to address workplace challenges of low to high difficulty involving children, coworkers, parents, and self-management. Teachers reported a 71% success rate at fully or mostly resolving workplace challenges using the action plan method introduced during professional development. Teachers matched character strengths to challenges in different ways, with certain strengths used mostly when the challenge involved children (love, forgiveness), others mostly with adults (bravery, teamwork), and others universally (leadership, kindness). Furthermore, teachers’ application of character strengths at work involved directly modeling character for children in 31% of reported cases. The application of character strengths among early childhood educators may play a significant role in improving teacher well-being, reducing job stress, and improving efforts to model character for young children.

Keywords: character strengths, positive psychology, professional development, social-emotional learning

Procedia PDF Downloads 105
249 Numerical Study of Homogeneous Nanodroplet Growth

Authors: S. B. Q. Tran

Abstract:

Drop condensation is the phenomenon in which tiny drops form when supersaturated vapour present in the environment condenses on a substrate, after which the droplets grow. Recently, this subject has received much attention due to its applications in many fields, such as thin film growth, heat transfer, recovery of atmospheric water, and polymer templating. In the literature, many papers have investigated, theoretically and experimentally, macroscale droplet growth with radii on the millimetre scale. However, few papers on nanodroplet condensation, especially theoretical work, are found in the literature. In order to understand droplet growth at the nanoscale, we perform numerical simulations of nanodroplet growth. We investigate and discuss the roles of droplet shape and monomer diffusion in drop growth and their effect on the growth law. The effect of droplet shape is studied through parametric studies of contact angle and disjoining pressure magnitude. Besides, the effect of pinning and de-pinning behaviours is also studied. We investigate the axisymmetric homogeneous growth of a single 10–100 nm water nanodroplet on a substrate surface. The main mechanism of droplet growth is the accumulation of laterally diffusing water monomers, formed by the absorption of water vapour from the environment onto the substrate. Under the assumption of quasi-steady thermodynamic equilibrium, the nanodroplet evolves according to the augmented Young–Laplace equation. Using continuum theory, we model the dynamics of nanodroplet growth, including the coupled effects of disjoining pressure, contact angle, and monomer diffusion, with the assumption of a constant flux of water monomers at the far field. The simulation results are validated by comparison with published experimental results. For the case of nanodroplet growth with a constant contact angle, our numerical results show that the initial droplet growth is a transient governed by monomer diffusion.
When the flux at the far field is small, the droplet grows at first by the diffusion of the initially available water monomers on the substrate and thereafter by the flux at the far field. In the steady late stage, the growth of both droplet radius and droplet height follows a power law with exponent 1/3, which is unaffected by the substrate disjoining pressure and the contact angle. However, the droplet is found to grow faster in the radial direction than in height when the disjoining pressure and contact angle increase. The simulation also reveals the effect of the computational domain size during the transient growth period: when the domain is larger, more mass enters through the free substrate domain, so more mass enters the droplet, and the droplet grows and reaches the steady state faster. For the case of pinning and de-pinning droplet growth, the simulation shows that the disjoining pressure does not affect the 1/3 power law for radius growth in the steady state. However, the disjoining pressure modifies the growth of the droplet height, which then follows a power law of 1/4. We demonstrate how spatial depletion of monomers can lead to a growth arrest of the nanodroplet, as observed experimentally.
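The model ingredients described above can be summarised schematically. Below is a generic small-slope form of the augmented Young–Laplace relation together with the reported growth laws; the notation (h for the droplet height profile, γ for surface tension, Π for disjoining pressure, p for film pressure) is an assumed illustration, not quoted from the paper:

```latex
% Schematic augmented Young--Laplace relation (small-slope form; notation assumed):
\[
  p \;=\; -\,\gamma \nabla^{2} h \;-\; \Pi(h)
\]
% Reported late-stage growth laws:
\[
  R(t) \propto t^{1/3}, \qquad
  h(t) \propto
  \begin{cases}
    t^{1/3}, & \text{constant contact angle,} \\[2pt]
    t^{1/4}, & \text{pinned contact line.}
  \end{cases}
\]
```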

Keywords: augmented young-laplace equation, contact angle, disjoining pressure, nanodroplet growth

Procedia PDF Downloads 272
248 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon

Authors: Laura M. F. Bertens

Abstract:

There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. 
Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, while implicit biases and the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies, and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
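The tree-to-network translation described above can be sketched in a few lines. This is a hypothetical Python illustration using networkx (the node names are invented, not taken from Olivier's Stammbaum, and the study itself used Visone rather than code):

```python
import networkx as nx

# Hypothetical sketch: a tree diagram of artistic influence re-expressed as a
# directed network, so that network measures become available.
G = nx.DiGraph()
G.add_edges_from([
    ("Master", "Pupil A"),
    ("Master", "Pupil B"),
    ("Pupil A", "Follower C"),
])

# A measure a static tree diagram hides: the full downstream influence reach.
reach = len(nx.descendants(G, "Master"))
```

Once the diagram is a network object, reordering and re-visualising the same artists along different relations (teacher-pupil, exhibition, patronage) becomes a data operation rather than a redrawing exercise.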

Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone

Procedia PDF Downloads 127
247 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico

Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos

Abstract:

Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is a pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions to assess the capacity, with the inconvenience of considerable computing time. In this study, a non-linear static analysis (‘pushover analysis’) was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: two end spans of 22 m and a central span of 32 m. The deck is 14 m wide, and the concrete slab is 18 cm deep. The bridge is supported on frames of five piers with hollow box-shaped sections, each pier being 7.05 m high and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin-shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato.
In this way, the displacements produced in the bridge were determined. Finally, the pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformation of the piers under a critical earthquake for this zone is almost imperceptible compared with their displacement capacity, owing to the geometry and reinforcement demanded by current design standards, which proved excessive. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence, it is proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier, as well as the mechanical properties of the materials (concrete and reinforcing steel). Once a pushover analysis was performed for this configuration, it was concluded that the bridge would maintain a correct seismic behavior, at least for the 19 accelerograms considered in this study. In this way, costs in material, construction, time, and labor would be reduced for this study case.
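The moment-curvature input used to define plastic hinges is commonly idealised as a bilinear curve (elastic up to yield, then hardening to the ultimate moment). The following is a minimal Python sketch of such an idealisation; the yield and ultimate values are hypothetical, not taken from the Celaya bridge project:

```python
def moment(curvature, phi_y, m_y, m_u, phi_u):
    """Bilinear moment-curvature idealisation for a pier section:
    linear-elastic up to yield (phi_y, m_y), then linear hardening
    to the ultimate point (phi_u, m_u), constant beyond ultimate."""
    if curvature <= phi_y:
        return m_y * curvature / phi_y          # elastic branch
    if curvature >= phi_u:
        return m_u                              # capped at ultimate moment
    frac = (curvature - phi_y) / (phi_u - phi_y)
    return m_y + frac * (m_u - m_y)             # hardening branch

# Hypothetical section: yield at phi = 0.004 1/m with M_y = 12 MN*m,
# ultimate at phi = 0.05 1/m with M_u = 15 MN*m.
m_mid = moment(0.02, phi_y=0.004, m_y=12.0, m_u=15.0, phi_u=0.05)
```

A hinge defined this way returns the resisting moment at any imposed curvature, which is the relation a pushover solver samples as the control displacement grows.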

Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis

Procedia PDF Downloads 151
246 A Study of Tactics in the Dissident Urban Form

Authors: Probuddha Mukhopadhyay

Abstract:

The infiltration of key elements into the civil structure is foraying its way to reclaim what is its own. The reclamation of lives and spaces, once challenged, becomes a consistent process of ingress, disguised as parallel to the moving city, dispersing into discourses often unheard of and conveniently forgotten. In this age of 'hyper'-urbanization, there are solutions suggested for a plethora of issues faced by citizens in improving their standards of living. Problems are ancillary to proposals that emerge out of the underlying disorders of the townscape. These interventions result in the formulation of urban policies, to consolidate and optimize, to regularize and to streamline resources. Policy and practice are processes in which the politics within policies defines the way urban solutions are prescribed. Social constraints, which formulate the various cycles of order and disorder within the urban realm, are the stigmas for such interventions. There is often a direct relation of policy to place, no matter how people-centric it may be projected to be. How we live our lives depends on where we live our lives - a relative statement for urban problems that varies from city to city. Communal compositions, welfare, crisis, socio-economic balance, and the need for management are the generic roots of urban policy formulation. However, in reality, the gentry administering its environmentalism is the criterion that shapes and defines the values and expanse of such policies. With respect to the psycho-spatial characteristics of urban spheres, and the other side of this game, there have been instances where associational values have been reshaped by interests: the public domain reclaimed for exclusivity, creating fortified neighborhoods. Here, the citizen cumulative is often drifted by proposals that would, over time, deplete such landscapes of the city. It is the organized rebellion that in turn formulates further inward-looking enclaves of latent aggression.
In recent times, it has been observed that the unbalanced division of power and the implied processes of regulating the weak stem a rebellion that responds in bits and parts. This phenomenon mimics guerrilla warfare tactics, attempting to have systems straightened out either by manipulation or by force. This is the form of the city determined by the various forms insinuated by city-wide decisions of the state. This study is an attempt at understanding the way development is interpreted by the state and civil society, and the role that community-driven processes undertake to reinstate their claims to the city. It is a charter of consolidated patterns of negotiation that tend to counter policies. The research encompasses a study of various contested settlements in two Indian cities, Mumbai and Kolkata, tackling dissent through spatial order. The study was carried out to identify systems - formal and informal - catering to the most challenged interests of the people with respect to their habitat: a model to counter the top-down authoritative framework challenging the legitimacy of such settlements.

Keywords: urban design, insurgence, tactical urbanism, urban governance, civil society, state

Procedia PDF Downloads 147
245 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition makes it possible to define a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are applied in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of each contribution to the formation of a target signal; F1, on the contrary, simply uses the unweighted angle.
In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter, and other unwanted information, while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improves when weighting is applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using actual profiles as test sets in a real hostile situation.
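The subspace-angle idea described above can be sketched compactly. This is an illustrative Python/NumPy sketch on synthetic stand-in data, not the authors' implementation: the "aircraft" are random low-rank matrices, and only the unweighted angle (in the spirit of F1) is shown:

```python
import numpy as np

def signal_subspace(profiles, k):
    """Orthonormal basis of the k-dimensional signal subspace of a matrix
    whose columns are range profiles of one target (top-k left singular
    vectors of its SVD)."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def subspace_angle(profile, U):
    """Unweighted angle between a test profile and the subspace spanned by
    the orthonormal columns of U."""
    cos_theta = np.linalg.norm(U.T @ profile) / np.linalg.norm(profile)
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))

# Synthetic stand-in data: each "aircraft" lives on its own rank-5 subspace.
rng = np.random.default_rng(0)
def make_profiles(dim=128, rank=5, n=40):
    return rng.standard_normal((dim, rank)) @ rng.standard_normal((rank, n))

training = {"aircraft_A": make_profiles(), "aircraft_B": make_profiles()}
bases = {name: signal_subspace(X, k=5) for name, X in training.items()}

# Classify a noisy profile of aircraft A by the minimum subspace angle.
test = training["aircraft_A"][:, 0] + 0.05 * rng.standard_normal(128)
best = min(bases, key=lambda name: subspace_angle(test, bases[name]))
```

Retaining only the top singular vectors discards the noise subspace, which is why the angle to the correct target's subspace stays small even after noise is added to the test profile.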

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 354
244 An Exploratory Factor and Cluster Analysis of the Willingness to Pay for Last Mile Delivery

Authors: Maximilian Engelhardt, Stephan Seeck

Abstract:

The COVID-19 pandemic is accelerating the already growing field of e-commerce. The resulting urban freight transport volume leads to traffic and negative environmental impacts. Furthermore, the service level of parcel logistics providers lags far behind consumer expectations. These challenges can be addressed by radically reorganizing the urban last-mile distribution structure: parcels could be consolidated in a micro hub within the inner city and delivered within time windows by cargo bike. This approach leads to a significant improvement in consumer satisfaction with the overall delivery experience. However, it also leads to significantly increased costs per parcel. While there is a relevant share of online shoppers who are willing to pay for such a delivery service, no deeper insights into this target group are available in the literature. Being aware of the importance of knowing target groups for businesses, the aim of this paper is to identify the most important factors that determine the willingness to pay for sustainable and service-oriented parcel delivery (factor analysis) and to derive customer segments (cluster analysis). In order to answer these questions, a data set is analyzed using quantitative methods of multivariate statistics. The data set was generated via an online survey in September and October 2020 within the five largest cities in Germany (n = 1,071). It contains socio-demographic, living-related, and value-related variables, e.g., age, income, city, living situation, and willingness to pay. In prior work of the author, the data were analyzed using descriptive and inferential statistical methods, which provided only limited insights regarding the above-mentioned research questions. An exploratory analysis using factor and cluster analysis promises deeper insights into the relevant influencing factors and segments of user behavior for the parcel delivery concept described.
The analysis model is built and implemented with the help of the statistical software language R. The data analysis is currently being performed and will be completed in December 2021. It is expected that the results will show the most relevant factors determining user behavior for sustainable and service-oriented parcel deliveries (e.g., age, current service experience, willingness to pay) and give deeper insights into the characteristics of the segments that are more or less willing to pay for a better parcel delivery service. Based on the expected results, relevant implications and conclusions can be derived for startups that are about to change the way parcels are delivered: more customer-oriented through time-window delivery and parcel consolidation, and more environmentally friendly through cargo bikes. The results will give detailed insights regarding their target groups of parcel recipients. Further research can explore alternative revenue models (beyond the parcel recipient) that could compensate for the additional costs, e.g., online shops that increase their service level or municipalities that reduce traffic on their streets.
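The two-step procedure described above (factor analysis to extract latent drivers of willingness to pay, then cluster analysis to segment respondents by their factor scores) can be sketched as follows. This is an illustrative Python sketch on synthetic Likert-style data, not the authors' R implementation:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Synthetic survey responses: 200 respondents answering 8 items, driven by
# two hypothetical latent factors (e.g., price sensitivity, service affinity).
rng = np.random.default_rng(42)
latent = rng.standard_normal((200, 2))
loadings = rng.standard_normal((2, 8))
responses = latent @ loadings + 0.3 * rng.standard_normal((200, 8))

# Step 1: factor analysis reduces the items to factor scores per respondent.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(responses)

# Step 2: cluster analysis segments respondents in factor-score space.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
```

Segmenting on factor scores rather than raw item responses keeps the clusters interpretable in terms of the named latent dimensions.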

Keywords: customer segmentation, e-commerce, last mile delivery, parcel service, urban logistics, willingness-to-pay

Procedia PDF Downloads 107
243 Opportunities and Challenges: Tracing the Evolution of India's First State-led Curriculum-based Media Literacy Intervention

Authors: Ayush Aditya

Abstract:

In today's digitised world, the extent of an individual's social involvement is largely determined by their interaction over the internet. The internet has emerged as a primary source of information consumption and a reliable medium for receiving updates on everyday activities. Owing to this change in the information consumption pattern, the internet has also emerged as a hotbed of misinformation. Experts are of the view that media literacy has emerged as one of the most effective strategies for addressing the issue of misinformation. This paper aims to study the evolution of the Kerala government's media literacy policy, its implementation strategy, challenges, and opportunities. The objective is to create a conceptual framework containing details of the implementation strategy based on the Kerala model. Extensive secondary research of literature, newspaper articles, and other online sources was carried out to locate the timeline of this policy. This was followed by semi-structured interviews with government officials from Kerala to trace the origin and evolution of the policy. Preliminary findings based on the collected data suggest that this policy is a case of policy by chance, as the officer who headed it during the state-level implementation had already piloted a media literacy program in a district called Kannur as the district collector. Through this paper, an attempt is made to trace the history of the media literacy policy starting from the Kannur intervention in 2018, which was started to address the issue of vaccine hesitancy around measles-rubella (MR) vaccination. If not for the vaccine hesitancy, this program would not have been rolled out in Kannur. Interviews with government officials suggest that when authorities decided to take up this initiative state-wide in 2020, the huge amount of misinformation emerging during the COVID-19 pandemic was the trigger.
There was misinformation regarding government orders, healthcare facilities, vaccination, and lockdown regulations, which affected everyone, unlike the case of Kannur, where the issue concerned only a certain age group of children. As a solution to this problem, the state government decided to create a media literacy curriculum to be taught in all government schools of the state, from standard 8 up to graduation. This was a tricky task, as a new course had to be introduced immediately into the school curriculum amid all the disruptions to the education system caused by the pandemic. It was revealed during the interviews that, in the case of the state-wide implementation, every step involved multiple checks and balances, unlike the earlier program, where stakeholders were roped in as and when the need emerged. On pedagogy, while the training during the pilot could be managed through PowerPoint presentations, designing a state-wide curriculum involved multiple iterations and expert approvals, in part because COVID-19-related misinformation had lost its significance over time. In the next phase of the research, an attempt will be made to compare other aspects of the pilot implementation with the state-wide implementation.

Keywords: media literacy, digital media literacy, curriculum based media literacy intervention, misinformation

Procedia PDF Downloads 93
242 Perceptions of Teachers toward Inclusive Education Focus on Hearing Impairment

Authors: Chalise Kiran

Abstract:

The prime idea of inclusive education is to mainstream every child in education. However, implementation becomes challenging when there are gaps between policy and practice, and even more so when children have disabilities. Generally, the focus is on the policy gap, but the problem may not always lie with policy; proper practice can be a challenge in countries like Nepal. In determining practice, teachers' perceptions toward inclusion play a vital role. Nepal has categorized disability into 7 types (physical, visual, hearing, vision/hearing, speech, mental, and multiple); of these, hearing impairment is the focus of this study. In the context of limited research on children with disabilities, and scarce research on children with hearing impairment (CWHI) and their education in Nepal, this study is a pioneering effort to understand the problems and challenges of CWHI-focused inclusive education in schools, including the gaps and barriers to its proper implementation. Philosophically, the paradigm of the study is post-positivism. In line with the post-positivist worldview, the study takes a quantitative approach, combining description of the situation with inferential relationships; this is related to a natural model of objective reality. The data were collected through an individual survey of the teachers and head teachers of 35 schools in Nepal. The survey questionnaire was prepared and filled in by respondents from schools where CWHI study, across 20 districts in Nepal's 7 provinces. Through these considerations, perceptions of CWHI-focused inclusive education were explored. The data were analyzed using both descriptive and inferential tools: a Likert scale-based analysis for the descriptive part, and a chi-square test to assess the significance of the relationships between dependent and independent variables.
The descriptive analysis showed that the majority of teachers hold positive perceptions toward implementing CWHI-focused inclusive education, and toward CWHI-focused inclusive education in general, though there are problems and challenges. The study identified the major challenges and problems by category. Some of them are: a large number of students in a single class; only generic textbooks available for CWHI, and textbooks not available to all students; little opportunity for teachers to acquire knowledge on CWHI; an inadequate number of teachers in the schools; no flexibility in the curriculum; weak information systems in schools; no educational counselor available; disaster-prone students; no child abuse control strategy; no disabled-friendly schools; no free health check-up facility; and no participation of the students in school activities and child clubs. By and large, it was found that a teacher's age, gender, years of experience, position, employment status, and own disability show no statistically significant relation to the successful implementation of CWHI-focused inclusive education or to perceptions of CWHI-focused inclusive education in schools. However, in some cases the set null hypothesis was rejected, while in others it was fully retained. The study suggests policy implications, implications for educational authorities, and implications for teachers and parents, by category.
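The chi-square test of independence used in the analysis can be illustrated with a small, entirely hypothetical contingency table; this is a minimal Python sketch with scipy, and the counts below are invented, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: teacher gender (rows) versus perception of
# CWHI-focused inclusive education (columns: positive, neutral, negative).
observed = np.array([
    [30, 10, 5],   # female teachers
    [25, 12, 8],   # male teachers
])

chi2, p, dof, expected = chi2_contingency(observed)

# With dof = (rows-1)*(cols-1) = 2, retain the null hypothesis of independence
# (no relation between gender and perception) when p exceeds 0.05.
independent = p > 0.05
```

Retaining the null here mirrors the study's finding that teacher characteristics showed no statistically significant relation to perceptions in most cases.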

Keywords: children with hearing impairment, disability, inclusive education, perception

Procedia PDF Downloads 112
241 Systemic Family Therapy in the Queensland Foster Care System: The Implementation of Integrative Practice as a Purposeful Intervention Implemented with Complex ‘Family’ Systems

Authors: Rachel Jones

Abstract:

Systemic family therapy in the Queensland foster care system, implemented as purposeful integrative practice with complex ‘family’ systems (expanding the traditional concept of family to include all relevant stakeholders for a child), is shown to improve the overall wellbeing of children with developmental delays and trauma in Queensland out-of-home care contexts. The importance of purposeful integrative practice in the field of systemic family therapy has been highlighted in achieving change in complex family systems. Essentially, it is the purposeful use of multiple interventions designed to meet the myriad of competing needs apparent for a child (with developmental delays resulting from early traumatic experiences, both in utero and in the early years) and their family. In the out-of-home care context, integrative practice is particularly useful for promoting positive change for the child and for what becomes an extended concept of who constitutes their family. Traditionally, a child's family may have included biological and foster family members, but when this concept is extended to include all relevant stakeholders (including biological family, foster carers, residential care workers, child safety, school representatives, health and allied health staff, police, and youth justice staff), the use of integrative family therapy can produce positive change for the child in overall wellbeing, development, risk profile, social and emotional functioning, mental health symptoms, and relationships across domains. By tailoring therapeutic interventions that draw on systemic family therapies from the first- and second-order schools of family therapy, neurobiology, solution-focused, trauma-informed, play and art therapy, narrative interventions, and disability/behavioural interventions, clinicians can promote change by combining therapeutic modalities with the individual and their stakeholders.
This presentation will unpack the implementation of systemic family therapy using this integrative approach to formulation and treatment for a child in out-of-home care in Queensland (experiencing developmental delays resulting from trauma). It considers the need for intervention for the individual and in the context of the environment and relationships. By reviewing a case example, this study aims to highlight the simultaneous and successful use of pharmacological interventions, psychoeducational programs for carers and school staff, parenting programs, cognitive-behavioural and trauma-informed interventions, traditional disability approaches, play therapy, mapping genograms and meaning-making, and using family and dyadic sessions for the system associated with the foster child. These elements of integrative systemic family practice have seen success in the reduction of symptoms and improved overall well-being of foster children and their stakeholders. Accordingly, a model for best practice using this integrative systemic approach is presented for this population group and preliminary findings for this approach over four years of local data have been reviewed.

Keywords: systemic family therapy, treating families of children with delays, trauma and attachment in family systems, improving practice and functioning of children and families

Procedia PDF Downloads 12
240 A Realist Review of Influences of Community-Based Interventions on Noncommunicable Disease Risk Behaviors

Authors: Ifeyinwa Victor-Uadiale, Georgina Pearson, Sophie Witter, D. Reidpath

Abstract:

Introduction: Smoking, alcohol misuse, unhealthy diet, and physical inactivity are the primary drivers of noncommunicable diseases (NCD), including cardiovascular diseases, cancers, respiratory diseases, and diabetes, worldwide. Collectively, these diseases are the leading cause of all global deaths, most of which are premature, affecting people aged between 30 and 70 years. Empirical evidence suggests that these risk behaviors can be modified by community-based interventions (CBI). However, there is little insight into the mechanisms and contextual factors of successful community interventions that impact risk behaviours for chronic diseases. This study examined the question: “Under what circumstances, for whom, and how do community-based interventions modify smoking, alcohol use, unhealthy diet, and physical inactivity among adults?” Adopting the Capability (C), Opportunity (O), Motivation (M), Behavior (B) (COM-B) framework for behaviour change, it sought to: (1) identify the mechanisms through which CBIs could reduce tobacco use and alcohol consumption and increase physical activity and the consumption of healthy diets and (2) examine the contextual factors that trigger the impact of these mechanisms on these risk behaviours among adults. Methods: Pawson’s realist review method was used to examine the literature. Empirical evidence and theoretical understanding were combined to develop a realist program theory that explains how CBIs influence NCD risk behaviours. Documents published between 2002 and 2020 were systematically searched in five electronic databases (CINAHL, Cochrane Library, Medline, ProQuest Central, and PsycINFO). They were included if they reported on community-based interventions aimed at cardiovascular diseases, cancers, respiratory diseases, and diabetes in a global context, and had an outcome targeted at smoking, alcohol, physical activity, and diet. Findings: Twenty-nine scientific documents were retrieved and included in the review.
Over half of them (n = 18; 62%) focused on three of the four risk behaviours investigated in this review. The review identified four mechanisms: capability, opportunity, motivation, and social support that are likely to change the dietary and physical activity behaviours in adults given certain contexts. There were weak explanations of how the identified mechanisms could likely change smoking and alcohol consumption habits. In addition, eight contextual factors that may affect how these mechanisms impact physical activity and dietary behaviours were identified: suitability to work and family obligations, risk status awareness, socioeconomic status, literacy level, perceived need, availability and access to resources, culture, and group format. Conclusion: The findings suggest that CBIs are likely to improve the physical activity and dietary habits of adults if the intervention function seeks to educate, incentivize, change the environment, and model the right behaviours. The review applies and advances theory, realist research, and the design and implementation of community-based interventions for NCD prevention.

Keywords: community-based interventions, noncommunicable disease, realist program theory, risk behaviors

Procedia PDF Downloads 93
239 Water Ingress into Underground Mine Voids in the Central Rand Goldfields Area, South Africa: Fluid-Induced Seismicity

Authors: Artur Cichowicz

Abstract:

The last active mine in the Central Rand Goldfields area (50 km x 15 km) ceased operations in 2008. This resulted in the closure of the pumping stations, which previously maintained the underground water level in the mining voids. As a direct consequence of the water being allowed to flood the mine voids, seismic activity has increased directly beneath the populated area of Johannesburg. Monitoring of seismicity in the area has been ongoing for over five years using a network of 17 strong ground motion sensors. The objective of the project is to improve strategies for mine closure. The evolution of the seismicity pattern was investigated in detail. Special attention was given to seismic source parameters such as magnitude, scalar seismic moment and static stress drop. Most events are located within historical mine boundaries. The seismicity pattern shows a strong relationship between the presence of the mining void and high levels of seismicity; no seismicity migration patterns were observed outside the areas of old mining. Seven years after the pumping stopped, the evolution of the seismicity indicates that the area is not yet in equilibrium. The level of seismicity in the area appears not to be decreasing over time, since the number of strong events, with Mw magnitudes above 2, is still as high as it was when monitoring began over five years ago. The average rate of seismic deformation is 1.6x10^14 Nm/year. Constant seismic deformation was not observed over the last five years: the deviation from the average is of the order of 6x10^13 Nm/year, which is significant. The variation of the cumulative seismic moment indicates that a constant deformation rate model is not suitable. Over the most recent five-year period, the total cumulative seismic moment released in the Central Rand Basin was 9.0x10^14 Nm. This is equivalent to one earthquake of magnitude 3.9, significantly less than what was experienced during mining operations.
Characterization of seismicity triggered by a rising water level in the area can be achieved through the estimation of source parameters. Static stress drop heavily influences ground motion amplitude, which plays an important role in risk assessments of potential seismic hazards in inhabited areas. The observed static stress drop in this study varied from 0.05 MPa to 10 MPa. It was found that large static stress drops could be associated with both small and large events. The temporal evolution of the inter-event time provides an understanding of the physical mechanisms of earthquake interaction. Changes in the characteristics of the inter-event time are produced when a stress change is applied to a group of faults in the region. Results from this study indicate that the fluid-induced source has a shorter inter-event time in comparison to a random distribution. This behaviour corresponds to a clustering of events, in which short recurrence times tend to be close to each other, forming clusters of events.
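The stated equivalence between the five-year cumulative moment (9.0x10^14 Nm) and a single magnitude-3.9 earthquake can be checked with the standard Hanks-Kanamori moment magnitude relation; a minimal sketch (the function name is ours, and the standard relation, not a formula from this paper, is assumed):

```python
import math

def moment_magnitude(m0_newton_metres):
    """Convert scalar seismic moment (N*m) to moment magnitude Mw
    using the standard Hanks-Kanamori relation."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# Cumulative moment reported for the Central Rand Basin over five years
mw = moment_magnitude(9.0e14)
print(round(mw, 1))  # -> 3.9
```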

Keywords: inter-event time, fluid induced seismicity, mine closure, spectral parameters of seismic source

Procedia PDF Downloads 285
238 The Relationship Between Teachers’ Attachment Insecurity and Their Classroom Management Efficacy

Authors: Amber Hatch, Eric Wright, Feihong Wang

Abstract:

Research suggests that attachment in close relationships affects one’s emotional processes, mindfulness, conflict-management behaviors, and interpersonal interactions. Attachment insecurity is often associated with maladaptive social interactions and suboptimal relationship qualities. Past studies have considered how the nature of emotion regulation and mindfulness in teachers may be related to student or classroom outcomes. Still, no research has examined how the relationship between such internal experiences and classroom management outcomes may also be related to teachers’ attachment insecurity. This study examined the interrelationships between teachers’ attachment insecurity, mindfulness tendencies, emotion regulation abilities, and classroom management efficacy as indexed by students’ classroom behavior and teachers’ response effectiveness. Teachers’ attachment insecurity was evaluated using the global ECRS-SF, which measures both attachment anxiety and avoidance. The present study includes a convenience sample of 357 American elementary school teachers who responded to a survey regarding their classroom management efficacy, attachment in/security, dispositional mindfulness, emotion regulation strategies, and difficulties in emotion regulation, primarily assessed via pre-existing instruments. Good construct validity was demonstrated for all scales used in the survey. Sample demographics, including gender (94% female), race (92% White), age (M = 41.9 yrs.), years of teaching experience (M = 15.2 yrs.), and education level, were similar to the population from which the sample was drawn (i.e., American elementary school teachers), although white women were slightly overrepresented. Correlational results suggest that teacher attachment insecurity is associated with poorer classroom management efficacy as indexed by students’ disruptive behavior and teachers’ response effectiveness.
Attachment anxiety was a much stronger predictor than attachment avoidance of adverse student behaviors and of ineffective teacher responses to those behaviors. Mindfulness, emotion regulation abilities, and years of teaching experience predicted positive classroom management outcomes. Attachment insecurity and mindfulness were more strongly related to frequent adverse student behaviors, while emotion regulation abilities were more strongly related to teachers’ response effectiveness. Teaching experience was negatively related to attachment insecurity and positively related to mindfulness and emotion regulation abilities. Although the data were cross-sectional, path analyses revealed that attachment insecurity is directly related to classroom management efficacy, and this relationship is further mediated by emotion regulation and mindfulness in teachers through two routes. The first, indirect route suggests double mediation, first by teachers’ emotion regulation and then by teacher mindfulness, in the relationship between teacher attachment insecurity and classroom management efficacy. In the second indirect route, mindfulness alone mediated the relationship between attachment insecurity and classroom management efficacy, resulting in improved model fit statistics. However, this indirect effect is much smaller than that of the double mediation route through emotion regulation and mindfulness. Given the significant prediction of teachers’ classroom management efficacy by attachment insecurity, mindfulness, and emotion regulation, both directly and indirectly, the authors recommend improving teachers’ classroom management efficacy via a three-pronged approach: enhancing teachers’ secure attachment, and supporting their learning of adaptive emotion regulation strategies and of mindfulness techniques.
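In a mediation model of the kind tested here, the indirect effect is the product of the component regression paths. A minimal sketch with invented, noise-free scores (all variable names and coefficients are illustrative, not the study's data):

```python
def slope(x, y):
    """OLS slope of y on x (single predictor, with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Hypothetical chain: insecurity -> emotion-regulation difficulty -> poor outcomes
insecurity = [1.0, 2.0, 3.0, 4.0, 5.0]
er_difficulty = [0.5 * v for v in insecurity]      # path a
poor_outcomes = [0.8 * m for m in er_difficulty]   # path b

a = slope(insecurity, er_difficulty)      # 0.5
b = slope(er_difficulty, poor_outcomes)   # 0.8
indirect = a * b                          # 0.4, the mediated effect
total = slope(insecurity, poor_outcomes)  # 0.4, fully mediated in this toy case
```

With real, noisy data the total effect would also contain a direct path, and significance would be assessed with a dedicated path-analysis package rather than this hand-rolled regression.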

Keywords: classroom management efficacy, student behavior, teacher attachment, teacher emotion regulation, teacher mindfulness

Procedia PDF Downloads 85
237 Risking Injury: Exploring the Relationship between Risk Propensity and Injuries among an Australian Rules Football Team

Authors: Sarah A. Harris, Fleur L. McIntyre, Paola T. Chivers, Benjamin G. Piggott, Fiona H. Farringdon

Abstract:

Australian Rules Football (ARF) is an invasion-based, contact field sport with over one million participants. The contact nature of the game increases exposure to all injuries, including head trauma. Evidence suggests that both concussion and sub-concussive traumas such as head knocks may damage the brain, in particular the prefrontal cortex. The prefrontal cortex may not reach full maturity until a person is in their early twenties, with males taking longer to mature than females. Repeated trauma to the prefrontal cortex during maturation may lead to negative social, cognitive and emotional effects. It is also during this period that males exhibit high levels of risk-taking behaviours. The relationship between risk propensity and the incidence of injury is an unexplored area of research. Little research has considered whether players’ (especially younger players’) level of risk propensity in everyday life places them at increased risk of injury. Hence, the current study investigated whether a relationship exists between risk propensity and self-reported injuries, including diagnosed concussion and head knocks, among male ARF players aged 18 to 31 years. Method: The study was conducted over 22 weeks with one West Australian Football League (WAFL) club during the 2015 competition. Pre-season risk propensity was measured using the 7-item self-report Risk Propensity Scale. Possible scores ranged from 9 to 63, with higher scores indicating higher risk propensity. Players reported their self-perceived injuries (concussion, head knocks, upper body and lower body injuries) fortnightly using the WAFL Injury Report Survey (WIRS). A unique ID code was used to ensure player anonymity while enabling linkage of survey responses and injury data tracking over the season. A General Linear Model (GLM) was used to analyse whether there was a relationship between risk propensity score and the total number of injuries for each injury type.
Results: Seventy-one players (N=71) with an age range of 18.40 to 30.48 years and a mean age of 21.92 years (±2.96 years) participated in the study. Players’ mean risk propensity score was 32.73, SD ±8.38. Four hundred and ninety-five (495) injuries were reported. The most frequently reported injury was head knocks, representing 39.19% of total reported injuries. The GLM identified a significant relationship between risk propensity and head knocks (F=4.17, p=.046). No other injury types were significantly related to risk propensity. Discussion: A positive relationship between risk propensity and head trauma in contact sports (specifically WAFL) was identified. Assessing players’ risk propensity may therefore identify those more at risk of head injuries, potentially leading to greater monitoring and education of these players throughout the season regarding self-identification of head knocks and symptoms that may indicate trauma to the brain. This is important because many players involved in WAFL are in their late teens or early twenties and hence may be at greater risk of negative outcomes if they experience repeated head trauma. Continued education and research into the risks associated with head injuries has the potential to improve player well-being.
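The GLM reported here tests, in its simplest single-predictor form, whether the variance in injury counts explained by risk propensity is large relative to residual variance. A minimal F-statistic sketch on invented scores (the data below are illustrative, not the study's; the study's own F was 4.17):

```python
def regression_f_test(x, y):
    """F statistic for a single-predictor linear model with intercept:
    regression sum of squares over residual mean square, F(1, n-2).
    Assumes at least one residual is nonzero (no perfect fit)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    beta = sxy / sxx
    ssr = beta * sxy                          # explained sum of squares
    sst = sum((b - my) ** 2 for b in y)
    sse = sst - ssr                           # residual sum of squares
    return ssr / (sse / (n - 2))

# Invented scores: higher risk propensity, more reported head knocks
risk = [20.0, 25.0, 30.0, 35.0, 40.0, 45.0]
knocks = [1.0, 2.0, 2.0, 4.0, 5.0, 7.0]
f = regression_f_test(risk, knocks)
```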

Keywords: football, head injuries, injury identification, risk

Procedia PDF Downloads 333
236 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers

Authors: Pawel Martynowicz

Abstract:

Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, and high buildings, which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard ‘ground-hook’ law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined lacks robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development as well as the numerical and experimental verification of Pontryagin maximum-principle-based vibration control concepts that produce the actuator control input directly (not the demanded force), so that the force tracking algorithm, a source of control inaccuracy, is entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified ‘ground-hook’ law, can be directly implemented in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient or random disturbances, which is a limitation of some other known solutions.
No offline calculation, assumed excitation/disturbance model, or vibration frequency determination is necessary. Moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique; thus the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system and addressing multiple operational aspects, such as minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil) and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems these values exceed 3.5.
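The standard ‘ground-hook’ baseline referred to above is a simple on-off switching rule, combined with the MR damper's passivity constraint. A minimal sketch (variable names, current and force limits are illustrative assumptions, not the paper's values):

```python
def ground_hook_current(v_structure, v_relative, i_max=1.0, i_min=0.0):
    """On-off 'ground-hook' law for a semiactive tuned vibration absorber
    with an MR damper: command high coil current (high damping) when the
    damper dissipates energy from the structure, minimum current otherwise."""
    if v_structure * v_relative > 0.0:
        return i_max   # damper force opposes structure motion: engage
    return i_min       # damper would feed energy back: disengage

def clip_force(f_demand, v_relative, f_max=1000.0):
    """Passivity/saturation sketch: a semiactive damper can only realise
    forces that oppose the relative motion, within its saturation limit."""
    if f_demand * v_relative > 0.0:   # demanded force would be active
        return 0.0
    return max(-f_max, min(f_max, f_demand))
```

A real MR damper also exhibits hysteresis-type dynamics between coil current and force, which is exactly what the paper's optimal-based concepts embed and this toy rule ignores.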

Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines

Procedia PDF Downloads 124
235 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory

Authors: Yi Wang, Greg Hearn, Terry Flew

Abstract:

In order to understand how cultural innovation occurs, this paper explores the interaction in Shenzhen, China, between universities, creative industries, and government in the creative economy, using the Triple Helix framework. During the past two decades, Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken actions to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply in the field of creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of ‘creative industries’ has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It claims to generate a ‘new economy’ of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also involve commercial inputs to the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper will concentrate on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry and government interact. China has sought to enhance the brand of its manufacturing industry through cultural policy. It aims to shift the image of ‘Made in China’ to ‘Created in China’ and to give Chinese brands more international competitiveness in a global economy.
Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan ‘Build a Leading Cultural City’ to signal the government’s determination to develop Shenzhen’s cultural capacity and creativity. The vision of Shenzhen is to become a cultural innovation center, a regional cultural center and an international cultural city. However, there has been a lack of attention to triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how co-location and interactions in creative spaces within triple helix networks influence city-based innovation. That is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between universities, creative industries and government in Shenzhen. Secondary analysis and documentary analysis are used as methods in an effort to practically ground and illustrate this theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated from consolidation and interaction across the institutions. This study thus provides an innovative lens for understanding the components, relationships and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.

Keywords: cultural policy, creative industries, creative city, triple Helix

Procedia PDF Downloads 206
234 Analyzing the Investment Decision and Financing Method of the French Small and Medium-Sized Enterprises

Authors: Eliane Abdo, Olivier Colot

Abstract:

SMEs are always considered a national priority due to their contribution to job creation, innovation and growth. Once the start-up phase is crossed with encouraging results, the company enters the growth phase. In order to improve its competitiveness and to maintain and increase its market share, the company finds itself under the necessity, even the obligation, to develop its tangible and intangible investments. SMEs are generally closely held companies in particular and often fragile financial situations, with limited resources and difficulty accessing capital markets; their shareholders live in a constant conflict between their independence and their need for capital increases that lead to the entry of new shareholders. Capital structure has always been considered the core of research in corporate finance; moreover, the financial crisis and its repercussions on credit availability, especially for SMEs, make SME financing a hot topic. On the other hand, financial theories do not provide definitive answers to capital structure questions; they offer tools and modes of financing that are more accessible to larger companies. Yet SMEs’ capital structure cannot be independent of their governance structure. The classic financial theory supposes independence between the investment decision and the financing decision. Thus, investment determines the volume of funding, but not the split between internal and external funds. In this context, we find it interesting to test the hypothesis that SMEs respond positively to the financial theories applied to large firms and to check whether they are constrained by the conventional solutions used by large companies. This research therefore focuses on the analysis of SMEs’ resource structure in parallel with their investment structure, in order to highlight a link between their asset and liability structures.
We grounded our conceptual model in two main theoretical frameworks, the pecking order theory and the trade-off theory, taking into consideration SMEs’ characteristics. Our data were drawn from the DIANE database. Five hypotheses were tested via a panel regression to understand the type of dependence between the financing methods of 3,244 French SMEs and the development of their investments over a period of 10 years (2007-2016). The results show dependence between equity and internal financing in the case of intangible investment development. Moreover, this type of business is constrained in accessing financial debt, since the guarantees provided are not sufficient to meet banks’ requirements. However, for tangible investment development, SMEs rely sequentially on internal financing, bank borrowing, and new share issuance or hybrid financing. This is consistent with the pecking order theory. We therefore conclude that unlisted SMEs incur more financial debt to finance their tangible investments than their intangible ones. However, they always prefer internal financing as a first choice. This seems to be confirmed by the finding that the profitability of the company is negatively related to the increase of financial debt. Thus, the pecking order theory’s predictions seem the most plausible. Consequently, SMEs rely primarily on self-financing and then go into debt as a priority to finance their financial deficit.
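A panel regression over firms and years typically removes unobserved, time-invariant firm heterogeneity before estimating the slope. A minimal fixed-effects ("within") sketch on invented data (the firm names, coefficients, and the financing-to-investment direction are purely illustrative; the paper's own estimator and specification are not reproduced here):

```python
def within_demean(values, firm_ids):
    """Subtract each firm's own mean: the 'within' transformation used
    before pooled OLS in a fixed-effects panel regression."""
    groups = {}
    for f, v in zip(firm_ids, values):
        groups.setdefault(f, []).append(v)
    means = {f: sum(vs) / len(vs) for f, vs in groups.items()}
    return [v - means[f] for f, v in zip(firm_ids, values)]

# Hypothetical two-firm panel: investment = firm effect + 2 * financing
firms = ["A", "A", "A", "B", "B", "B"]
financing = [1.0, 2.0, 3.0, 2.0, 4.0, 6.0]
investment = [1 + 2 * x for x in financing[:3]] + [5 + 2 * x for x in financing[3:]]

xd = within_demean(financing, firms)
yd = within_demean(investment, firms)
beta = sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)
print(beta)  # -> 2.0: firm fixed effects removed, slope recovered exactly
```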

Keywords: capital structure, investments, life cycle, pecking order theory, trade off theory

Procedia PDF Downloads 112
233 Concentration of Droplets in a Transient Gas Flow

Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal

Abstract:

The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of the mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require tracking large numbers of droplets, as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity via an expansion in the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long, circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated.
It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes numbers of 0.1 and 0.2; Reynolds numbers of 10 and 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted high droplet concentrations in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method: a significant variation for an approach widely used in engineering applications. Based on the results of these comparisons, the Osiptsov method gives a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of the results of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
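In one spatial dimension the fully Lagrangian (Osiptsov) idea reduces to integrating, alongside the droplet trajectory, the Jacobian J = dx/dx0 of the Lagrangian map; the number density then follows from n = n0 / |J| without tracking droplet clouds. A minimal sketch assuming linear Stokes drag (the 1D reduction, function names, and forward-Euler stepping are our illustrative choices, not the paper's implementation):

```python
def osiptsov_1d(u, du_dx, x0, tau, t_end, dt=1e-3):
    """Integrate one droplet (Stokes drag, response time tau) together
    with the Jacobian J = dx/dx0 and its rate w = dJ/dt; return the final
    position and the concentration ratio n/n0 = 1/|J|.
    u(x) is the carrier gas velocity, du_dx(x) its spatial gradient."""
    x, v = x0, u(x0)      # droplet released with the local gas velocity
    J, w = 1.0, 0.0       # Jacobian of the Lagrangian map, and dJ/dt
    t = 0.0
    while t < t_end:
        dv = (u(x) - v) / tau            # Stokes drag on the droplet
        dw = (du_dx(x) * J - w) / tau    # drag equation differentiated wrt x0
        x += v * dt
        v += dv * dt
        J += w * dt
        w += dw * dt
        t += dt
    return x, 1.0 / abs(J)

# Sanity check: in a uniform carrier flow the Jacobian stays 1,
# so the droplet concentration is unchanged.
x, n_ratio = osiptsov_1d(u=lambda x: 1.0, du_dx=lambda x: 0.0,
                         x0=0.0, tau=0.1, t_end=1.0)
```

Near trajectory crossings |J| passes through zero and the concentration becomes singular, which is exactly why the method resolves the sharp local maxima that the averaged Eulerian field smears out.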

Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations

Procedia PDF Downloads 257
232 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses account for almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, the construction and optimization of non-axisymmetric endwall profiles for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA FINE/Design3D coupled with FINE/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, with multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated based on values automatically chosen for the control points defined during parameterization. The optimization was carried out with two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm combined with an artificial neural network was used in order to approach the global optimum; the evaluation of successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. Within each of these two classes of optimization method, four optimization cases were run: the hub only, the shroud only, the hub and shroud together, and a sequential case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency; total pressure loss and entropy were reduced. The combined hub-and-shroud optimization did not match the results achieved in the individual hub and shroud cases, which may be caused by the fact that there were too many control variables. The sequential case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
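Each perturbation cut of the endwall is a Bezier curve evaluated from its control points; one common, numerically stable construction is de Casteljau's algorithm of repeated linear interpolation. A minimal sketch (the control-point amplitudes below are invented for illustration, not the paper's parameterization):

```python
def bezier_point(control_points, t):
    """Evaluate a 1D Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm: repeated linear interpolation of the
    control points until a single value remains."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# A hypothetical endwall 'cut': zero perturbation height at the passage
# edges, with a bump shaped by the interior control points.
heights = [0.0, 1.0, 1.0, 0.0]   # illustrative control-point amplitudes
crest = bezier_point(heights, 0.5)
```

Because the curve interpolates its end control points, pinning those to zero guarantees the perturbed endwall blends smoothly back into the axisymmetric surface at the cut boundaries.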

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 225
231 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices

Authors: Zhuang Yiwen

Abstract:

The paper examines the challenges facing the Singapore healthcare system in the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors, and explores the use of barcoding and image recognition to identify and document medical devices in nursing practice. In nursing practice, the use of barcodes to identify medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to determine which segment contains the model identifier. Adoption of the UDI would improve barcode use, but many subsidized accessories may still lack barcodes. The paper argues that readiness for UDI and barcode standardization requires standardized information, fields, and logic in the electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper further explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals, given the limitations of barcode scanning. The study found that this solution requires an implant information database and checking of the OCR output against that database, as well as customization of the algorithm, cropping out objects that interfere with text recognition, and applying image adjustments. It entails additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterility criteria. Integration with the EMR is also necessary, and the solution requires changes to the user's workflow.
The paper also suggests the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information on procedures, patients, and devices, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system in the tracking and traceability of medical devices. It recommends UDI and barcode standardization to improve tracking and reduce errors, and explores the use of image recognition to identify and document medical devices in nursing practice. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, and of barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system improve the tracking and traceability of medical devices and ultimately enhance patient safety.
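The "selectively parse barcode segments" requirement can be sketched for GS1-style UDI barcodes, where each data field is tagged by an Application Identifier (AI). The AI table below is a small illustrative subset of the GS1 specification, and the assumption that FNC1 separators arrive as ASCII GS characters (`\x1d`) depends on scanner configuration; it is not drawn from the paper:

```python
# Subset of GS1 Application Identifiers (illustrative, not exhaustive):
# fixed-length fields can be sliced directly, variable-length fields
# run until the next FNC1 separator or the end of the data.
FIXED_LEN = {"01": 14, "17": 6, "11": 6}  # GTIN, expiry date, production date
VARIABLE = {"10", "21"}                   # lot number, serial number

def parse_udi(data: str) -> dict:
    """Split a scanned GS1 data string into its AI-tagged segments."""
    out = {}
    i = 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED_LEN:
            n = FIXED_LEN[ai]
            out[ai] = data[i:i + n]
            i += n
        elif ai in VARIABLE:
            end = data.find("\x1d", i)  # FNC1 rendered as ASCII GS
            end = len(data) if end == -1 else end
            out[ai] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"unknown application identifier: {ai}")
    return out

# Hypothetical scan: GTIN + expiry + lot + serial
udi = parse_udi("01008898420000001725123110A22B4\x1d21SN12345")
```

A parser of this shape lets the EMR pick out the device identifier segment (AI 01) regardless of which optional production-identifier fields a given manufacturer encodes.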

Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms

Procedia PDF Downloads 77
230 Index of Suitability for Culex pipiens sl. Mosquitoes in Portugal Mainland

Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, REVIVE team

Abstract:

The environment of the mosquito complex Culex pipiens sl. in Portugal mainland is evaluated on the basis of its abundance, using a georeferenced data set collected over seven years (2006-2012), from May to October. The suitability of the different regions can be delineated from the relative abundance areas; the suitability index is directly proportional to disease transmission risk and allows mitigation measures to be focused so as to avoid outbreaks of vector-borne diseases. The interest in the Culex pipiens complex is justified by its medical importance: the females bite all warm-blooded vertebrates and are involved in the circulation of several viruses of concern to human health, such as West Nile virus, iridoviruses, reoviruses, and parvoviruses. The abundance of Culex pipiens mosquitoes was documented systematically across the territory by the local health services, in a long-duration program running since 2006. The environmental factors used to characterize the vector habitat are land use/land cover, distance to cartographed water bodies, altitude, and latitude. The focus is on the mosquito females, whose gonotrophic cycle (mating, blood meal, oviposition) is responsible for virus transmission; their abundance is the key to planning non-aggressive prophylactic countermeasures that may eliminate the transmission risk while avoiding chemical degradation of the environment. Meteorological parameters such as relative air humidity, air temperature (minimum, maximum, and mean daily temperatures), and daily total rainfall were gathered from the weather station network for the same dates and crossed with the standardized female abundance in a geographic information system (GIS). The mean capture and the percentage of above-average captures associated with each variable are used as criteria to compute a threshold for each meteorological parameter; the difference in mean capture above/below the threshold was assessed statistically.
The meteorological parameters measured at the network of weather stations across the country are averaged by month and interpolated to produce raster maps, which can be segmented according to the meaningful threshold for each parameter. The intersection of the maps of all parameters obtained for each month shows the evolution of the suitable meteorological conditions through the mosquito season, taken as May to October, although the first and last months are less relevant. In parallel, mean and above-average captures were related to the physiographic parameters: the land use/land cover classes most relevant in each month, the preferred altitudes, and the most frequent distance to water bodies, a factor closely related to mosquito biology. The maps produced from these results were crossed with the previously segmented meteorological maps in order to obtain a suitability index for the Culex pipiens complex evaluated over the whole country, together with its evolution from the beginning to the end of the mosquito season.
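The map-intersection step lends itself to a compact sketch: each interpolated monthly raster is segmented by its threshold, and the resulting binary masks are intersected into a suitability map. The raster values, the threshold values, and the "suitable means above threshold" direction used here are all illustrative assumptions; in the study, the threshold and its direction come from the capture analysis for each parameter:

```python
import numpy as np

def suitability(rasters: dict, thresholds: dict) -> np.ndarray:
    """Intersect per-parameter binary masks into a 0/1 suitability map."""
    masks = [rasters[name] >= thr for name, thr in thresholds.items()]
    return np.logical_and.reduce(masks).astype(int)

# Synthetic monthly rasters on a small grid (stand-ins for the
# interpolated weather-station fields).
rng = np.random.default_rng(0)
rasters = {
    "mean_temp": rng.uniform(10, 30, (4, 4)),     # deg C
    "rel_humidity": rng.uniform(30, 90, (4, 4)),  # %
}
thresholds = {"mean_temp": 18.0, "rel_humidity": 60.0}  # illustrative
index = suitability(rasters, thresholds)
```

Repeating this intersection for each month of the season, and then crossing the result with the physiographic maps, yields the monthly evolution of the suitability index described above.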

Keywords: suitability index, Culex pipiens, habitat evolution, GIS model

Procedia PDF Downloads 576