Search results for: knowledge discovery techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13418

278 Combined Civilian and Military Disaster Response: A Critical Analysis of the 2010 Haiti Earthquake Relief Effort

Authors: Matthew Arnaouti, Michael Baird, Gabrielle Cahill, Tamara Worlton, Michelle Joseph

Abstract:

Introduction: More than ten years after the magnitude 7.0 earthquake struck the capital of Haiti, impacting over three million people and causing the deaths of over two hundred thousand, the multinational humanitarian response remains the largest disaster relief effort to date. This study critically evaluates the multi-sector and multinational disaster response to the earthquake and considers how the lessons learned from this analysis can be applied to future disaster response efforts. We place particular emphasis on assessing the interaction between civilian and military sectors during this humanitarian relief effort, with the aim of highlighting how concrete guidelines are essential to improving future responses. Methods: An extensive scoping review of the relevant literature was conducted, in which library scientists performed reproducible, verified systematic searches of multiple databases. Grey literature and hand searches were used to identify additional unclassified military documents for inclusion in the study. More than 100 documents were included for data extraction and analysis. Key domains were identified, including: Humanitarian and Military Response, Communication, Coordination, Resources, Needs Assessment, and Pre-Existing Policy. Corresponding information and lessons learned pertaining to these domains were then extracted, detailing the barriers and facilitators to an effective response. Results: Multiple themes were noted that cut across all identified domains, including the lack of adequate pre-existing policy and extensive ambiguity about actors’ roles. This ambiguity was continually influenced by the complex role the United States military played in the disaster response. At a deeper level, the effects of neo-colonialism and concern about infringements on Haitian sovereignty played a substantial role at all levels, setting the pre-existing conditions and shaping the redevelopment efforts that followed. Furthermore, external factors significantly impacted the response, particularly the loss of life within the political and security sectors. This was compounded by the destruction of important infrastructure systems - particularly electricity supplies and telecommunication networks, as well as air- and seaport capabilities. Conclusions: This study stands as one of the first and most comprehensive evaluations to systematically analyse the civilian and military response, including their collaborative efforts. It offers vital information for improving future combined responses and provides a significant opportunity for advancing knowledge in disaster relief, which remains a more pressing issue than ever. The categories and domains formulated highlight interdependent factors that should be considered in future disaster responses, with significant potential to aid the effective performance of humanitarian actors. Further studies will be grounded in these findings, particularly the need for greater inclusion of the Haitian perspective in the literature through additional qualitative research.

Keywords: civilian and military collaboration, combined response, disaster, disaster response, earthquake, Haiti, humanitarian response

Procedia PDF Downloads 99
277 Rethinking the Languages for Specific Purposes Syllabus in the 21st Century: Topic-Centered or Skills-Centered

Authors: A. Knezović

Abstract:

The 21st century has transformed the labor market landscape, posing new and different demands on university graduates as well as university lecturers: the knowledge and academic skills students acquire in the course of their studies should be applicable and transferable from the higher education context to their future professional careers. Given the context of the Languages for Specific Purposes (LSP) classroom, the teachers’ objective is not only to teach the language itself, but also to prepare students to use that language as a medium to develop generic skills and competences. These include media and information literacy, critical and creative thinking, problem-solving and analytical skills, effective written and oral communication, as well as collaborative work and social skills, all of which are necessary to make university graduates more competitive in everyday professional environments. On the other hand, due to limitations of time and large numbers of students in classes, the frequently topic-centered syllabus of LSP courses places considerable focus on acquiring the subject matter and specialist vocabulary instead of sufficiently developing the skills and competences required by students’ prospective employers. This paper intends to explore some of those issues as viewed both by LSP lecturers and by business professionals in their respective surveys. The surveys were conducted among more than 50 LSP lecturers at higher education institutions in Croatia, more than 40 HR professionals, and more than 60 university graduates with degrees in economics and/or business working in management positions in mainly large and medium-sized companies in Croatia. Various elements of LSP course content have been taken into consideration in this research, including reading and listening comprehension of specialist texts, acquisition of specialist vocabulary and grammatical structures, as well as presentation and negotiation skills. The ability to hold meetings, conduct business correspondence, write reports, academic texts, and case studies, and take part in debates was also taken into consideration, as well as informal business communication, business etiquette, and core courses delivered in a foreign language. The results of the surveys conducted among LSP lecturers will be analyzed with reference to the extent to which those elements are included in their courses and how consistently and thoroughly they are evaluated according to their course requirements. Their opinions will be compared to the results of the surveys conducted among professionals from a range of industries in Croatia so as to examine how useful and important they perceive the same elements of LSP course content to be in their working environments. Such comparative analysis will thus show to what extent the syllabi of LSP courses meet the demands of the employment market when it comes to students’ language skills and competences, as well as transferable skills. Finally, the findings will also be compared to observations based on practical teaching experience and the relevant sources used in this research. In conclusion, the ideas and observations in this paper are merely open-ended questions without conclusive answers, but they might prompt LSP lecturers to re-evaluate the content and objectives of their course syllabi.

Keywords: languages for specific purposes (LSP), language skills, topic-centred syllabus, transferable skills

Procedia PDF Downloads 286
276 Official Game Account Analysis: Factors Influencing Users' Judgments in Limited-Word Posts

Authors: Shanhua Hu

Abstract:

Social media, as a critical means of promoting films, video games, and digital products, has received substantial research attention, but several critical barriers remain: (1) few studies explore the internal and external connections of a product as part of the multimodal context that gives rise to readability and commercial return; (2) multimodal analysis of game publishers’ official product accounts and its impact on users’ behaviors, including purchase intention, social media engagement, and playing time, is lacking; (3) no standardized, ecologically valid data covering varying game types are available to study the complexity of official accounts’ postings over a time period. This proposed research helps to tackle these limitations in order to develop a model of readability study that is more ecologically valid, robust, and thorough. To accomplish this objective, this paper provides a more diverse dataset comprising different visual elements and messages collected from the official Twitter accounts of the top 20 best-selling games of 2021. Video game companies target potential users through social media; a popular approach is to set up an official account to maintain exposure. Typically, major game publishers create an official account on Twitter months before the game's release date to post updates on the game's development, announce collaborations, and reveal spoilers. Analyses of tweets from those official Twitter accounts would assist publishers and marketers in identifying how to efficiently and precisely deploy advertising to increase game sales. The purpose of this research is to determine how official game accounts use Twitter to attract new customers, specifically which types of messages are most effective at increasing sales. The dataset includes the number of days between each Twitter post and the actual release date, the readability of the post (Flesch Reading Ease Score, FRES), the number of emojis used, the number of hashtags, the number of followers of the mentioned users, the categorization of the posts (i.e., spoilers, collaborations, promotions), and the number of video views. The timeline of Twitter postings from official accounts will be compared to the history of pre-orders and sales figures to determine the potential impact of social media posts. This study aims to determine how the above-mentioned characteristics of official accounts' Twitter postings influence game sales and to examine the possible causes of this influence. The outcome will provide researchers with a list of potential aspects that could influence people's judgments in limited-word posts. With increased average online time, users adapt more quickly than before to online information exchange and reading, in aspects such as word choice, sentence length, and the use of emojis or hashtags. The study of official game account promotion will not only enable publishers to create more effective promotion techniques in the future but also provide ideas for future research on how social media posts with a limited number of words influence consumers' purchasing decisions. Future research can focus on more specific linguistic aspects, such as precise word choice in advertising.
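
For context, the readability metric used in this dataset, the Flesch Reading Ease Score (FRES), is computed from word, sentence, and syllable counts. The sketch below is a minimal illustration (not the authors' pipeline) that scores a tweet-length text with the standard FRES formula and a crude vowel-group syllable counter:

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels, at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Standard FRES: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("Pre-order now. The demo drops this Friday!"))  # higher = easier
```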

Keywords: engagement, official account, promotion, twitter, video game

Procedia PDF Downloads 53
275 Assessing Measures and Caregiving Experiences of Thai Caregivers of Persons with Dementia

Authors: Piyaorn Wajanatinapart, Diane R. Lauver

Abstract:

The number of persons with dementia (PWD) has increased, and informal caregivers provide the majority of their care. These caregivers can perceive both gains and burdens; caregivers who report high perceived gains may report lower burdens and better health. Gaps in the caregiving literature included: psychometrics not reported in a few studies and unclear definitions of gains; most studies lacking theoretical guidance and being conducted in Western countries; and incompletely described relationships among caregiving variables: motivations, satisfaction with psychological needs, social support, gains, burdens, and physical and psycho-emotional health. These gaps were addressed by assessing the psychometric properties of selected measures, providing clear definitions of gains, using self-determination theory (SDT) to guide the study, and conducting the study in Thailand. The study purposes were to evaluate six measures for internal consistency reliability, content validity, and construct validity. This study also examined the relationships among caregiving variables: motivations (controlled and autonomous), satisfaction with psychological needs (autonomy, competency, and relatedness), perceived social support, perceived gains, perceived burdens, and physical and psycho-emotional health. The study used a cross-sectional, correlational descriptive design with two convenience samples. Sample 1 comprised five Thai experts who assessed the content validity of the measures. Sample 2 comprised 146 Thai caregivers of PWD, used to assess construct validity, reliability, and relationships among caregiving variables. Experts rated the questionnaires and returned them via e-mail; caregivers answered the questionnaires at clinics of four Thai hospitals. Data were analyzed using descriptive statistics and bivariate and multivariate analyses, with the composite indicator structural equation model used to control for measurement error. Most caregivers were female (82%), middle-aged (M = 51.1, SD = 11.9), and daughters (57%). They provided care for 15 hours/day over 4.6 years. The content validity indices of items and scales were .80 or higher for clarity and relevance, and experts suggested item revisions. Cronbach's alphas ranged from .63 to .93 for ten subscales of four measures and from .26 to .57 for three subscales. The gain scale was acceptable for construct validity. Controlling for covariates, controlled motivations, satisfaction with the three subscales of psychological needs, and perceived social support had positive relationships with physical and psycho-emotional health. Both the satisfaction-with-autonomy subscale and perceived social support had negative relationships with perceived burdens. The three subscales of satisfaction with psychological needs had positive relationships among themselves, and the physical and psycho-emotional health subscales had positive relationships with each other. Furthermore, perceived burdens had negative relationships with physical and psycho-emotional health. This study was the first to use SDT to describe relationships among caregiving variables in Thailand. Caregivers' characteristics were consistent with the literature. Four of the six measures were valid and reliable; two were not. The study provided broad knowledge about these relationships. Interpretation of the results warrants caution because the same sample was used to evaluate both the psychometric properties of the measures and the relationships among caregiving variables. Researchers could use the four validated measures in further caregiving studies, and using a theory would help describe the concepts, propositions, and measures used. Researchers may also examine satisfaction with psychological needs as a mediator. Future studies collecting data from caregivers in community settings are needed.
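
For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scale scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items).
data = [[4, 5, 4, 5],
        [3, 4, 3, 4],
        [5, 5, 4, 5],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 3, 3, 4]]
print(round(cronbach_alpha(data), 2))
```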

Keywords: caregivers, caregiving, dementia, measures

Procedia PDF Downloads 280
274 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Locations of Fractures

Authors: Irfan Anjum Manarvi, Fawzi Aljassir

Abstract:

Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair damage. The literature shows that mechanical researchers conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of tools and test equipment and difficulties in the availability of human bones. They proposed the need for further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors, and then carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of designs and materials. Over time, it has found applications in many other industries due to its accuracy and flexibility in the selection of materials and the types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also started to see its applicability. However, work in the area of tarsals, metatarsals, and phalanges using this technique is very limited. Therefore, the present research has focused on using this technique for the analysis of these critical bones of the human body. The technique requires a 3-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used to take accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to create these computer geometric models. These were then imported into finite element analysis software, and a model refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive and tensile loads or their combination. Results were collected for deformations along various axes, and stress and strain distributions were observed to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them could fail earlier; this is presented in this research. The results of this investigation could be used for further experimental studies by academics and researchers, as well as by industrial engineers, in the development of foot protection devices or tools for surgical operations and recovery treatment of these bones. Researchers could build on these models to carry out analysis of a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.
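
To make the finite element workflow concrete, the toy sketch below assembles a one-dimensional, two-element bar model and solves for displacements and element stresses under an axial load. This is a drastic simplification of a 3D bone mesh; the material and load values are assumptions for illustration, not values from the study:

```python
import numpy as np

# Toy 1D FEA: a bar fixed at node 0 with an axial load at node 2.
E = 17e9        # assumed Young's modulus (Pa), order of cortical bone
A = 1e-4        # assumed cross-sectional area (m^2)
L = 0.02        # assumed element length (m)
F = 500.0       # assumed axial load (N)

k = E * A / L                     # bar element stiffness
K = np.zeros((3, 3))              # global stiffness matrix (3 nodes)
for e in (0, 1):                  # assemble two bar elements
    K[e:e+2, e:e+2] += k * np.array([[1, -1], [-1, 1]])

f = np.array([0.0, 0.0, F])       # nodal load vector
u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # fixed boundary condition at node 0

strain = np.diff(u) / L           # element strains
stress = E * strain               # element stresses (Pa)
print("displacements (m):", u)
print("element stresses (MPa):", stress / 1e6)
```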

Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis

Procedia PDF Downloads 305
273 External Program Evaluation: Impacts and Changes on Government-Assisted Refugee Mothers

Authors: Akiko Ohta, Masahiro Minami, Yusra Qadir, Jennifer York

Abstract:

The Home Instruction for Parents of Preschool Youngsters (HIPPY) is a home instruction program for mothers of children 3 to 5 years old. Using role-play as a method of teaching, the participating mothers work with their home visitors and learn how to deliver the HIPPY curriculum to their children. Building on HIPPY, the Reviving Hope and Home for High-Risk Refugee Mothers (RHH) program was created to provide more personalized peer support and to respond to ongoing settlement challenges for isolated and vulnerable Government Assisted Refugee (GAR) mothers. GARs often have greater needs and vulnerabilities than other refugee groups. While support is available, they often face various challenges and barriers in starting their new lives in Canada, such as inadequate housing, low first-language literacy levels, low competency in English or French, and social isolation. The pilot project was operated by the Mothers Matter Centre (MMC) from January 2019 to March 2021 in partnership with the Immigrant Services Society of BC (ISSofBC). The formative evaluation was conducted by a research team at Simon Fraser University. In order to provide more suitable support for GAR mothers, RHH intended: to offer more flexibility in HIPPY delivery, supported by a home visitor, to meet the needs of refugee mothers facing various conditions and challenges; to have a pool of financial resources to be used for the RHH families when needed during the program period; to have another designated staff member, called a community navigator, assigned to facilitate the support system for the RHH families in their settlement; to have a portable device available for each RHH mother to navigate settlement support resources; and to provide other variations of the HIPPY curriculum as options for the RHH mothers, including a curriculum targeting pre-HIPPY-age children. Reflections on each program component were collected from RHH mothers and staff members of MMC and ISSofBC, including frontline workers and management staff, through individual interviews and focus group discussions. Each of the RHH program components was analyzed and evaluated by applying Moore's four-domains framework to identify key information and generate new knowledge. To capture RHH mothers' program experience in more depth, based on their own reflections, the photovoice method was used; some photos taken by the mothers will be shared to illustrate their RHH experience as part of their life stories. Over the period of the program, the evaluation observed how RHH mothers became more confident in various domains, such as communicating with others, taking public transportation alone, and teaching their own child(ren). One of the major factors behind this success was the home visitors' flexibility and creativity in creating a more meaningful and tailored approach for each mother, depending on her background and personal situation. The role of the community navigator was tested and improved during the program period; the community navigators took the key role of assessing the needs of the RHH families and connecting them with community resources. Both the home visitors and community navigators were immigrant mothers themselves and, owing to their dedicated care for the RHH mothers, they were able to gain trust and work closely and efficiently with them.

Keywords: refugee mothers, settlement support, program evaluation, Canada

Procedia PDF Downloads 148
272 Investigation of the Attitude of Production Workers towards Job Rotation in the Automotive Industry against the Background of Demographic Change

Authors: Franciska Weise, Ralph Bruder

Abstract:

Due to demographic change in Germany, with its declining birth rate and increasing age of the population, the share of older people in society is rising. This development is also reflected in the workforce of German companies. Therefore, companies should focus on improving ergonomics, especially in the area of age-related work design. The literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging population is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. The existing literature does not cover methods for investigating employees' attitudes towards job rotation. However, in order to evaluate job rotation, it is essential to know the views of workers towards rotation. In addition to the investigation of attitudes, the design of rotation plays a crucial role: the sequence of activities and the rotation frequency influence both the worker and the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may never rotate, rotate once or several times a day, rotate at every break, or rotate even more often than every break. This depends on the opportunity or possibility to rotate whenever workers want to. From the preliminary talks, some challenges can be derived; for example, rotation across the whole team is not possible if a team member still requires training for a new task. In order to determine the relation between the design of job rotation and the attitude towards it, a questionnaire survey was carried out in vehicle manufacturing. The questionnaire is used to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees, divided into three age groups, are compared with regard to their rotation frequency and their attitudes towards rotation. Three questions are under examination. The first is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards that frequency are interconnected. The third concerns the attitudes of the different age groups towards the frequency of rotation. So far, 144 employees, all working in production, have taken part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were older than forty-five. The data show no difference between the three age groups in the frequency of job rotation (N = 139, median = 4, Chi² = .859, df = 2, p = .651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman's rho = .223, two-sided p = .008); fewer than four workplaces per day are not enough for the employees. The third question - which differences can be found between older and younger people who rotate in different ways and with different attitudes towards job rotation - cannot yet be conclusively answered. So far, the data show that younger people would like to rotate very often; for older people, no correlation can be found with acceptable significance. The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees to rotation issues and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees, standards must be derived from the survey results that show how to rotate in an ergonomic way while considering attitudes towards job rotation.
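
The reported statistics can be reproduced in outline with standard tools. The abstract does not name the exact group-comparison test; the sketch below assumes a Kruskal-Wallis test, whose H statistic is chi-square-distributed with df = k - 1 = 2 for three groups, alongside Spearman's rank correlation. All data here are invented for illustration:

```python
from scipy import stats

# Hypothetical rotation frequencies (workplaces per day) for three age groups.
under_30 = [6, 7, 5, 6, 7, 6]
age_30_44 = [6, 5, 7, 6, 6, 5]
over_45 = [5, 6, 6, 7, 5, 6]

# Group comparison: H is chi-square-distributed with df = k - 1 = 2.
h_stat, p_groups = stats.kruskal(under_30, age_30_44, over_45)
print(f"H = {h_stat:.3f}, p = {p_groups:.3f}")

# Rank correlation between rotation frequency and attitude (1-5 scale).
frequency = [6, 7, 5, 6, 4, 7, 5, 6, 3, 6]
attitude = [4, 5, 3, 4, 2, 5, 3, 4, 2, 4]
rho, p_corr = stats.spearmanr(frequency, attitude)
print(f"Spearman rho = {rho:.3f}, two-sided p = {p_corr:.3f}")
```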

Keywords: job rotation, age-related work design, questionnaire, automotive industry

Procedia PDF Downloads 282
271 Valuing Academic Excellence in Higher Education: The Case of Establishing a Human Development Unit in a European Start-up University

Authors: Eleftheria Atta, Yianna Vovides, Marios Katsioloudes

Abstract:

In the fusion of neoliberalism and globalization, Higher Education (HE) is becoming increasingly complex. The changing patterns of the economy worldwide have driven the development of a high value-added economy, and HE has been viewed as a social investment, significant for the development of knowledge-based societies and economies. In order to contribute to economic competitiveness, universities are required to produce local and employable workers who fit into the neoliberal economic environment. The emergence of neoliberal performativity, which measures outcomes, is a key aspect of the neoliberal era. It facilitates the redesign of institutions, making organizations and individuals think about themselves in relation to their performance. Performativity and performance management systems lead academics to become more effective, advance professionally, improve, and outperform others, and therefore to act competitively. Besides the aforementioned complexities, universities also encounter the challenge of maintaining a set of values to guide the institution's actions, values which have always been highly respected in the development of an HE institution. The formulation of a clear set of values also determines the institutional culture that will be maintained. It is evident that values create a significant framework for the workplace and may determine positive institutional results. Universities are required to engage in capacity-building activities that improve their students' competences as well as offer opportunities to administrative and academic staff to develop professionally in light of neoliberal performativity. Additionally, the university is now considered an innovation ecosystem, playing a significant role in providing education, research, and innovation to help create solutions to social, environmental, and economic challenges. Thus, universities become central in orchestrating multi-actor innovation networks. This presentation will discuss the establishment of an institutional unit entitled the 'Human Development Unit' (HDU) in a European start-up university. The activities of the HDU are envisioned as drivers for innovation that would enable the university as a whole to maintain its position in a fast-changing world and be ready to face adaptive challenges. In addition, the HDU provides students, staff, and faculty with opportunities to advance their academic and professional development through engagement in programs that align with institutional values. It also serves as a connector with the broader community. The presentation will highlight the functions of the three centers the unit will coordinate, namely the Student Development Center (SDC), the Faculty & Staff Development Center (FSDC), and the Continuing Education Center (CEC). The presentation aligns with the aim of the conference, which welcomes contributions discussing innovations and challenges encountered in HE. In particular, this presentation discusses the establishment of an innovative unit at a start-up university which will contribute to creating an institutional culture shaped by the value of academic excellence for students as well as for staff, shaping and defining the functions and activities of the unit. The establishment of the proposed unit is crucial in a start-up university, both to differentiate it from competitors and to sustain its presence given the pressures of a neoliberal HE context.

Keywords: academic excellence, globalization, human development unit, neoliberalism

Procedia PDF Downloads 111
270 Effect of Rapeseed Press Cake on Extrusion System Parameters and Physical Pellet Quality of Fish Feed

Authors: Anna Martin, Raffael Osen

Abstract:

The demand for fish from aquaculture is constantly growing. Concurrently, due to a shortage of fishmeal caused by extensive overfishing, fishmeal substitution with plant proteins is becoming increasingly important for the production of sustainable aquafeed. Several research studies have evaluated the impact of plant protein meals, concentrates, or isolates on fish health and fish feed quality. However, these protein raw materials often require elaborate and expensive manufacturing, and their availability is limited. Rapeseed press cake (RPC) - a side product of de-oiling processes - exhibits high potential as a plant-based fishmeal alternative in fish feed for carnivorous species due to its availability, low cost, and protein content. In order to produce aquafeed with RPC, it is important to systematically assess i) inclusion levels of RPC that yield pellet qualities similar to fishmeal-containing formulations and ii) how extrusion parameters can be adjusted to achieve targeted pellet qualities. However, the effect of RPC on extrusion system parameters and pellet quality has only scarcely been investigated. Therefore, the aim of this study was to evaluate the impact of feed formulation, extruder barrel temperature (90, 100, 110 °C), and screw speed (200, 300, 400 rpm) on extrusion system parameters and the physical properties of fish feed pellets. A co-rotating pilot-scale twin-screw extruder was used to produce five iso-nitrogenous feed formulations: a fishmeal-based reference formulation including 16 g/100 g fishmeal and four formulations in which fishmeal was substituted with RPC at 25, 50, 75, or 100%. Extrusion system parameters - product temperature, die pressure, specific mechanical energy (SME), and torque - were monitored while samples were taken. After drying, pellets were analyzed for optical appearance, sectional and longitudinal expansion, sinking velocity, bulk density, water stability, durability, and specific hardness. In our study, the addition of even minor amounts of RPC had a strong impact on pellet quality parameters, especially on expansion, but only marginally affected extrusion system parameters. Increasing amounts of RPC reduced sectional expansion, sinking velocity, bulk density, and specific hardness and increased longitudinal expansion compared to the reference formulation without RPC. Water stability and durability were almost unaffected by RPC addition. Moreover, pellets with rapeseed components showed a coarser structure than pellets containing only fishmeal. When the adjustment of barrel temperature and screw speed was investigated, an increase in extruder barrel temperature led to a slight decrease in SME and die pressure and an increased sectional expansion of the reference pellets, but had almost no effect on rapeseed-containing fish feed pellets. Changes in screw speed also had little effect on the physical properties of the pellets; however, with raised screw speed, the SME and the product temperature increased. In summary, a one-to-one substitution of fishmeal with RPC without adjusting extrusion process parameters does not result in fish feed of a designated quality. Therefore, deeper knowledge of raw materials and their behavior under the thermal and mechanical stresses applied during extrusion is required.
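
Specific mechanical energy is commonly derived from screw speed, net torque, and throughput. The paper does not state its formula, so the sketch below assumes one widely used definition, SME = 2*pi*n*M / m_dot, with n the screw speed in revolutions per second, M the net torque, and m_dot the mass flow rate:

```python
import math

def specific_mechanical_energy(rpm, torque_nm, throughput_kg_h):
    """SME in kJ/kg from screw speed (rpm), net torque (N*m), and throughput (kg/h).

    Assumes one common definition: SME = 2*pi*n*M / m_dot (J/kg).
    """
    n = rpm / 60.0                       # rev/s
    m_dot = throughput_kg_h / 3600.0     # kg/s
    return 2 * math.pi * n * torque_nm / m_dot / 1000.0  # kJ/kg

# Hypothetical pilot-scale operating point, not values from the study.
print(round(specific_mechanical_energy(rpm=300, torque_nm=40, throughput_kg_h=25), 1))
```

At this hypothetical operating point the function returns roughly 181 kJ/kg, within the range typical of food extrusion.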

Keywords: extrusion, fish feed, press cake, rapeseed

Procedia PDF Downloads 120
269 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine

Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski

Abstract:

Research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural, and recreational aviation. Technically, they largely remain at a pre-war level of knowledge in terms of theory of operation, design, and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are fed by carburetors of quite primitive construction. At present, due to environmental requirements and the need to address climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control, and biofuels. The paper describes simulation research on innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. Consequently, the required level of safety and better functionality compared to today's plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion, and the engine's capability for efficient combustion of ecological fuels. New, redundant elements of the control system can also improve the safety of the aircraft. The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities recorded by the measurement systems) for determining the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) speeds ranging from 1500 to 2200 rpm (in steps of 100 rpm); c) loads ranging from propeller power to maximum power; d) altitudes ranging, according to the International Standard Atmosphere, from 0 to 8000 m (in steps of 1000 m); e) fuel: automotive gasoline ES95. Three models of different ignition coil types (with different discharge energies) were studied. The analysis aimed at optimizing the design of the innovative ignition system for an aircraft engine, involving: a) the optimization of the measurement systems; and b) the optimization of the actuator systems. The studies enabled research on the sensitivity of the signals for controlling the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed only limited benefits in terms of fuel consumption; thus, including spark management in the optimization is mandatory to significantly decrease fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
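
The notion of the optimal ignition angle (maximum brake torque, MBT) can be illustrated numerically: near the optimum, torque versus spark advance is approximately parabolic, so the peak can be estimated by fitting a quadratic to a sweep of operating points. The sketch below uses invented data and is not the engine model or optimization method used in the paper:

```python
import numpy as np

# Invented torque measurements (N*m) over a sweep of ignition advance angles (deg BTDC).
angles = np.array([10, 14, 18, 22, 26, 30], dtype=float)
torque = np.array([610, 642, 661, 668, 659, 637], dtype=float)

# Fit a parabola T(a) = c2*a^2 + c1*a + c0 and take its vertex as the MBT estimate.
c2, c1, c0 = np.polyfit(angles, torque, 2)
mbt_angle = -c1 / (2 * c2)
print(f"estimated MBT angle: {mbt_angle:.1f} deg BTDC")
```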

Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization

Procedia PDF Downloads 362
268 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. Learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges including resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on simulation for pre-clinical students, while students' confidence levels and perceived competence remain to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at the International Medical University, involving pre-clinical year (medical) students who began low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning, with the significance level set at a p-value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback provided input on the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.

Keywords: low-fidelity simulation, pre-clinical simulation, students satisfaction, self-confidence

Procedia PDF Downloads 35
267 Stakeholder Mapping and Requirements Identification for Improving Traceability in the Halal Food Supply Chain

Authors: Laila A. H. F. Dashti, Tom Jackson, Andrew West, Lisa Jackson

Abstract:

Traceability systems are being developed and tested in the agri-food and halal food sectors due to their ability to monitor ingredient movements, track sources, and detect potential issues related to food integrity. However, designing a traceability system for the halal food supply chain poses significant challenges due to diverse stakeholder requirements and the complexity of their needs (including varying food ingredients, different sources, destinations, supplier processes, and certifications). Achieving a halal food traceability solution tailored to stakeholders' requirements within the supply chain necessitates prior knowledge of these needs. Although attempts have been made to address design-related issues in traceability systems, literature on stakeholder mapping and identification of requirements specific to halal food supply chains is scarce. To address this gap, this pilot study aims to identify the objectives, requirements, and recommendations of stakeholders in the Kuwaiti halal food industry. The paper presents insights gained from the pilot study, which utilized semi-structured interviews to collect data from a Kuwait-based international halal food manufacturer. The objective was to gain an in-depth understanding of stakeholders' objectives, requirements, processes, and concerns pertaining to the design of a traceability system in Kuwait's halal food sector. The stakeholder mapping results revealed that government entities, food manufacturers, retailers, and suppliers are key stakeholders in Kuwait's halal food supply chain. Lessons learned from this pilot study regarding requirements capture for traceability systems include the need to streamline communication, focus on communication at each level of the supply chain, leverage innovative technologies to enhance process structuring and operations, and reduce halal certification costs. The findings also emphasized the limitations of existing traceability solutions, such as limited cooperation and collaboration among stakeholders, the high cost of implementing traceability systems without government support, lack of clarity regarding product routes, and disrupted communication channels between stakeholders. These findings contribute to a broader research program aimed at developing a stakeholder requirements framework that uses business process modelling to establish a unified model of traceable stakeholder requirements.

Keywords: supply chain, traceability system, halal food, stakeholders’ requirements

Procedia PDF Downloads 79
266 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

Safety requirements for Generation IV nuclear reactor designs, especially the new generation of sodium-cooled fast reactors (SFR), require a risk-informed approach to model severe accidents (SA) and their consequences in case of outside release. In SFRs, aerosols are produced during a core disruptive accident when primary-system sodium is ejected into the containment and burns in contact with air, producing sodium aerosols. One of the key aspects of safety evaluation is the behavior of in-containment sodium aerosols and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation, as a fire can both thermally damage the containment vessel and pose an overpressurization risk. Moreover, during the fire, fission products initially dissolved in the primary sodium can be aerosolized or, as can be the case for volatile fission products, released in gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (iodine, being toxic and volatile, is the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), this aerosol will then become radioactive. If such aerosols leak into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms for fission products, the radiological consequences of an SA leading to a loss of containment leak-tightness will also be affected. This work is split into two phases. Firstly, a method to theoretically understand the kinetics and thermodynamics of the heterogeneous reactions between sodium aerosols and the fission products I2 and HI is proposed. Ab initio density functional theory (DFT) calculations using the Vienna Ab initio Simulation Package (VASP) are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight into their affinity towards iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation, on the calculated lowest-energy surface of Na2CO3 was performed, providing adsorption energies and a description of the optimized configurations of the adsorbates on the stable surface. Secondly, the heterogeneous reaction between (I2)g and Na2CO3 aerosols was investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2 and passing it through a reaction chamber containing a Na2CO3 aerosol deposit. The concentration of iodine was then measured at the exit of the reaction chamber. Preliminary observations indicate an effective uptake of (I2)g on the Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is the first step in addressing the gaps in knowledge of in-containment and atmospheric source terms, which are essential aspects of the safety evaluation of SFR SAs. In particular, this study aims to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the development of new models to be implemented in an integrated computer simulation tool to analyze and evaluate SFR safety designs.
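
For reference, adsorption energies from slab calculations of this kind are conventionally defined as the energy of the combined system minus the energies of the clean surface and the isolated molecule. This is the standard convention, assumed here since the abstract does not spell out its definition:

```latex
% Standard adsorption-energy convention for a molecule X (here I2 or HI) on a Na2CO3 slab;
% in this convention a negative E_ads indicates favourable (exothermic) adsorption.
\[
  E_{\mathrm{ads}} \;=\; E_{\mathrm{slab}+X} \;-\; E_{\mathrm{slab}} \;-\; E_{X}
\]
```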

Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate

Procedia PDF Downloads 193
265 Resilience-Based Emergency Bridge Inspection Routing and Repair Scheduling under Uncertainty

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway network systems play a vital role in disaster response for disaster-damaged areas. Damaged bridges in such networks can impede disaster response by disrupting the transportation of rescue teams or humanitarian supplies. Therefore, emergency inspection and repair of bridges, to quickly collect bridge damage information and recover the functionality of highway networks, is of paramount importance to disaster response. A widely used measure of a network's capability to recover from disasters is resilience. To enhance highway network resilience, many studies have developed repair scheduling methods for the prioritization of bridge-repair tasks. These methods assume that repair activities are performed after the damage to a highway network is fully understood via inspection, although inspecting all bridges in a regional highway network may take days, leading to significant delays in repairing bridges. In reality, emergency repair activities can commence as soon as damage data for the bridges that are crucial to emergency response are obtained. Given that emergency bridge inspection and repair (EBIR) activities are executed simultaneously in the response phase, real-time interactions between these activities can occur: the blockage of highways due to repair activities can affect inspection routes, which in turn affect emergency repair scheduling by providing real-time information on bridge damage. However, the impact of such interactions on the optimal emergency inspection routes (EIR) and emergency repair schedules (ERS) has not been discussed in prior studies. To overcome these deficiencies, this study develops a routing and scheduling model for EBIR that accounts for real-time inspection-repair interactions to maximize highway network resilience. A stochastic, time-dependent integer program is proposed for the complex, real-time interacting EBIR problem, given multiple inspection and repair teams at locations set post-disaster. A hybrid genetic algorithm, which integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process, is developed. Computational tests are performed using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that the simultaneous implementation of bridge inspection and repair activities can significantly improve highway network resilience. Moreover, the deployment of inspection and repair teams should match each other, and network resilience will not improve once a unilateral increase in inspection teams or repair teams exceeds a certain level. This study contributes to both knowledge and practice. First, the developed mathematical model makes it possible to capture the impact of real-time inspection-repair interactions on inspection routing and repair scheduling and to efficiently derive optimal EIR and ERS on a large and complex highway network. Moreover, this study contributes to the organizational dimension of highway network resilience by providing optimal strategies for highway bridge management. With the decision support tool, disaster managers are able to identify the most critical bridges for disaster management and make decisions on proper inspection and repair strategies to improve highway network resilience.
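
As a loose illustration of the hybrid idea (a heuristic local-improvement step embedded in a genetic algorithm), the sketch below evolves a repair order for damaged bridges. The fitness function is a crude stand-in that rewards restoring important bridges early; it is not the paper's stochastic, time-dependent integer program, and all data are invented:

```python
import random

IMPORTANCE = [5, 1, 3, 8, 2, 7, 4, 6]      # hypothetical bridge weights
DURATION   = [2, 1, 3, 2, 1, 4, 2, 3]      # hypothetical repair durations
N = len(IMPORTANCE)

def fitness(order):
    # Earlier completion of important bridges scores higher.
    t, score = 0, 0.0
    for b in order:
        t += DURATION[b]
        score += IMPORTANCE[b] / t
    return score

def crossover(p1, p2):
    # Order crossover (OX): keep a slice of p1, fill the rest in p2's order.
    a, b = sorted(random.sample(range(N), 2))
    child = [-1] * N
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(N):
        if child[i] == -1:
            child[i] = rest.pop(0)
    return child

def local_improve(order):
    # Heuristic step that makes the GA "hybrid": accept the best adjacent swap.
    best = order[:]
    for i in range(N - 1):
        cand = order[:]
        cand[i], cand[i + 1] = cand[i + 1], cand[i]
        if fitness(cand) > fitness(best):
            best = cand
    return best

pop = [random.sample(range(N), N) for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [local_improve(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]

print("best repair order:", max(pop, key=fitness))
```

The greedy swap step trades some diversity for faster convergence, mirroring the stated motivation of integrating a heuristic into the genetic algorithm to accelerate the evolution process.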

Keywords: disaster management, emergency bridge inspection and repair, highway network, resilience, uncertainty

Procedia PDF Downloads 90
264 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes

Authors: O. F. Elkommos

Abstract:

“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. Although humour courses are now taught in universities around the world, in the Egyptian context they are not included. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. In linguistics, humour has been seen as a fifth component of the communicative competence of the second language learner; in literature, it has been studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Verbal Humor (SSTH), the General Theory of Verbal Humor (GTVH), the Audience Based Theory of Humor (ABTH), and their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of literary texts and perceiving its function in the work also brings positive associations in class for educational purposes. Humour is by default a provoking, laughter-generating device. Recognizing incongruity, perceiving it, and resolving it is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language. Students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process will also use humour, thus easing students' acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.

Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resource, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL

Procedia PDF Downloads 273
263 Health Inequalities in the Global South: Identification of Poor People with Disabilities in Cambodia to Generate Access to Healthcare

Authors: Jamie Lee Harder

Abstract:

In the context of rapidly changing social and economic circumstances in the developing world, this paper analyses access to public healthcare for poor people with disabilities in Cambodia. Like other countries of South East Asia, Cambodia is developing at a rapid pace. Cambodia's historical past, however, has set former social policy structures back to zero. This past forces Cambodia and its citizens to implement new public health policies aligned with the needs of social care, healthcare, and urban planning. In this context, the role of people with disabilities (PwDs) is crucial, as new developments should and can take their specific needs into consideration from the beginning onwards. This paper is based on qualitative research with expert interviews and focus group discussions in Cambodia. During the fieldwork, it became clear that the identification tool for the poorest households (HHs) does not count disability as a financial risk for falling into poverty, neither through sickness nor through the higher health expenditures and/or lower income that result from the disability. The social risk group of poor PwDs faces several barriers in accessing public healthcare. Urbanization, socio-economic health status, and opportunities for education all influence social status and have an impact on the health situation of these individuals. Cambodia has various difficulties in providing access to people with disabilities, mostly due to barriers regarding finances, geography, quality of care, poor knowledge of their rights, and negative social and cultural beliefs. Reduced budgets and a lack of prioritization lead to the need for reorientation of local communities, international and national non-governmental organizations, and social policy. The poorest HHs are identified with a questionnaire, the IDPoor program, for which the Ministry of Planning is responsible. The identified HHs receive an 'Equity Card', which provides free-of-charge access to public healthcare centers and hospitals, among other benefits. The dataset usually does not include information about disability status. Four focus group discussions (FGDs) with 28 participants revealed various barriers to accessing public healthcare; these barriers go far beyond a missing ramp at the healthcare center. The contents of the FGDs were confirmed and repeated during the expert interviews with the local ministries, NGOs, international organizations, and private persons working in the field. The participants of the FGDs faced, and continue to face, strong discrimination, low capacity to work and earn their own income, dependency on others, and lower social competence in their lives. When discussing their health situation, we identified a large difference between those who are identified and hold an Equity Card and those who do not. Participants reported high costs without IDPoor identification, positive experiences when going to the health center in terms of attitude and treatment, low satisfaction with specific treatment capacities, negative rumors, and discrimination, with the consequence that many fear seeking treatment. The findings on the problem of access to public healthcare for risk groups can be adapted to situations in other countries.

Keywords: access, disability, health, inequality, Cambodia

Procedia PDF Downloads 126
262 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred with mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues to mitigate the impact of potentially unfair decision-making practice is the so-called 'right to explanation'. In essence, this right is derived from the provisions of the General Data Protection Regulation (‘GDPR’) ensuring data subjects' right of access and obliging data controllers to provide relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific GDPR provision on automated decision-making, the debates mainly focus on the efficacy and exact scope of the 'right to explanation'. At its core, the underlying logic of the argued remedy lies in a transparency imperative. Allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and, often, heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for and empowerment of data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims to push the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and right to access by intellectual property law, namely copyright and trade secrets. It is asserted that trade secrets, in particular, present an often-insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions from such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public-interest exception in trade secrets law, as well as the introduction of a strict liability regime for non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 103
261 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument

Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki

Abstract:

To the best of the authors' knowledge, we here experimentally demonstrate, for the first time, a quantified correlation between real-time measured optical features of ambient air and off-line measured toxicity data. Using these correlations, we then present a novel methodology for the real-time characterisation of ambient toxicity based on multi-wavelength, aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while in health terms it is one of the most harmful atmospheric constituents as well. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicology measurement is predominantly based on the posterior analysis of filter-accumulated aerosol, with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; therefore, comprehensive analysis of the existing data set is quite limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodologies for air quality monitoring are pressing issues in air pollution research. During the last decades, many experimental studies have verified that there is a relation between the chemical composition of carbonaceous particulate matter and its absorption features, quantified by the Absorption Angström Exponent (AAE). Although the scientific community agrees that PhotoAcoustic Spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol in an accurate and reliable way, multi-wavelength PAS instruments able to selectively characterise the wavelength dependency of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. Here we demonstrate the complete microphysical characterisation of wintertime urban ambient air, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a single mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient air, we also demonstrate the results of eco-, cyto- and genotoxicity measurements of ambient aerosol based on the posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurement results.
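
For readers who want to reproduce the optical part of this workflow, the AAE is conventionally obtained from a power-law fit of the absorption coefficient against wavelength, alpha(lambda) ∝ lambda^(-AAE). The following minimal Python sketch illustrates that calculation; the wavelengths and absorption values are hypothetical placeholders, not data from the 4λ-PAS campaign.

```python
import numpy as np

def absorption_angstrom_exponent(wavelengths_nm, absorption_coeffs):
    """Estimate the Absorption Angström Exponent (AAE).

    Fits the power law alpha(lambda) ~ lambda**(-AAE) by linear
    regression in log-log space, using all channels of a
    multi-wavelength instrument at once.
    """
    log_wl = np.log(np.asarray(wavelengths_nm, dtype=float))
    log_abs = np.log(np.asarray(absorption_coeffs, dtype=float))
    slope, _ = np.polyfit(log_wl, log_abs, 1)
    return -slope  # AAE is the negative of the log-log slope

# Hypothetical four-channel measurement (placeholder values only):
wavelengths = [266, 355, 532, 1064]   # nm
absorption = [48.1, 31.9, 18.7, 8.2]  # absorption coefficient, 1/Mm
print(f"AAE = {absorption_angstrom_exponent(wavelengths, absorption):.2f}")
```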

Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test

Procedia PDF Downloads 279
260 Diamond-Like Carbon-Based Structures as Functional Layers on Shape-Memory Alloy for Orthopedic Applications

Authors: Piotr Jablonski, Krzysztof Mars, Wiktor Niemiec, Agnieszka Kyziol, Marek Hebda, Halina Krawiec, Karol Kyziol

Abstract:

NiTi alloys, possessing unique mechanical properties such as pseudoelasticity and the shape memory effect (SME), are suitable for many applications, including implanthology and biomedical devices. Additionally, these alloys have elastic modulus values similar to those of human bone, which is very important in orthopedics. Unfortunately, the environment of physiological fluids in vivo causes unfavorable release of Ni ions, which in turn may lead to metallosis as well as allergic reactions and toxic effects in the body. For these reasons, the surface properties of NiTi alloys should be improved to increase corrosion resistance while preserving biological properties, i.e., excellent biocompatibility. Promising in this respect are layers based on DLC (Diamond-Like Carbon) structures, which are an attractive solution for many applications in implanthology. These coatings (DLC), usually obtained by PVD (Physical Vapour Deposition) and PA CVD (Plasma Activated Chemical Vapour Deposition) methods, can also be modified by doping with other elements such as silicon, nitrogen, oxygen, fluorine, titanium and silver. These methods, in combination with a suitably designed layer structure, make it possible to co-determine the physicochemical and biological properties of the modified surfaces. The mentioned techniques provide specific physicochemical properties of the substrate surface in a single technological process. In this work, the following types of layers based on DLC structures (incl. Si-DLC or Si/N-DLC) were proposed as a prospective and attractive approach to the surface functionalization of a shape memory alloy. Nitinol substrates were modified under plasma conditions using RF CVD (Radio Frequency Chemical Vapour Deposition). The influence of plasma treatment on the useful properties of the modified substrates was determined, both after deposition of DLC layers doped with silicon and/or nitrogen atoms and after pre-treatment alone in an O2 or NH3 plasma atmosphere in an RF reactor. The microstructure and topography of the modified surfaces were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM). Furthermore, the atomic structure of the coatings was characterized by IR and Raman spectroscopy. The research also included the evaluation of surface wettability and surface energy, as well as the characterization of selected mechanical and biological properties of the layers. In addition, the corrosion properties of the alloys before and after modification in physiological saline were investigated. In order to determine the corrosion resistance of NiTi in Ringer's solution, potentiodynamic polarization curves (LSV – Linear Sweep Voltammetry) were plotted. Furthermore, the evolution of the corrosion potential of the NiTi alloy versus immersion time in Ringer's solution was recorded. Based on all of the research carried out, the usefulness of the proposed modifications of nitinol for medical applications was assessed. It was shown, inter alia, that the obtained Si-DLC layers on the surface of the NiTi alloy exhibit a characteristic complex microstructure and increased surface development, which is an important aspect in improving the osteointegration of an implant. Furthermore, the modified alloy exhibits biocompatibility, and the transfer of metal (Ni, Ti) into Ringer’s solution is clearly limited.

Keywords: bioactive coatings, corrosion resistance, doped DLC structure, NiTi alloy, RF CVD

Procedia PDF Downloads 202
259 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adapted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for the optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions needed to simplify the system of Blade Element Momentum Theory equations, resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, bring them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
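
As an illustration of the statistical half of such a project, the sketch below fits a Weibull distribution, a common parametric model for wind speed, to a synthetic sample and checks the goodness of fit. It is a minimal example under stated assumptions: the shape and scale values are hypothetical and the data are simulated, not the LaGuardia rooftop measurements.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for on-site wind speed measurements (m/s);
# shape k=2.0 and scale c=6.0 m/s are hypothetical choices.
speeds = stats.weibull_min.rvs(c=2.0, scale=6.0, size=1000, random_state=42)

# Maximum-likelihood fit with the location pinned at zero, the usual
# convention for wind speed, which cannot be negative.
shape, loc, scale = stats.weibull_min.fit(speeds, floc=0)
print(f"MLE fit: shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# Goodness of fit: Kolmogorov-Smirnov test against the fitted model.
ks = stats.kstest(speeds, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```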

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 199
258 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring their variation mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, dielectrophoresis, microfluidic micropost chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is introduced, based on a 20-nm-thick nanopillar array that supports the cells and keeps them at the ideal recognition distance for the redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false-positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamers in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nanosupported cell technology using redox-labeled aptasensors has since been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis for early cancer diagnosis and cancer therapy in clinical settings.
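
For context, a limit of detection for a sensor of this kind is commonly estimated from a linear calibration curve using the 3σ criterion, LOD = 3σ/slope. The sketch below illustrates that generic calculation in Python; the calibration numbers are invented placeholders and do not come from this study.

```python
import numpy as np

# Hypothetical calibration data: cell count vs. electrochemical signal.
# These numbers are illustrative placeholders, not measured values.
cells = np.array([0, 10, 25, 50, 100, 200])           # cells per droplet
signal = np.array([0.9, 3.1, 6.8, 13.2, 26.5, 52.0])  # peak current, nA

# Linear calibration: signal = slope * cells + intercept
slope, intercept = np.polyfit(cells, signal, 1)

# Residual standard deviation approximates the blank noise sigma.
residuals = signal - (slope * cells + intercept)
sigma = residuals.std(ddof=2)  # ddof=2: two fitted parameters

# Conventional 3-sigma (sometimes 3.3-sigma) detection limit.
lod_cells = 3 * sigma / slope
print(f"slope = {slope:.3f} nA/cell, LOD ~ {lod_cells:.1f} cells")
```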

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 78
257 The Late Bronze Age Archeometallurgy of Copper in Mountainous Colchis (Lechkhumi), Georgia

Authors: Nino Sulava, Brian Gilmour, Nana Rezesidze, Tamar Beridze, Rusudan Chagelishvili

Abstract:

Studies of ancient metallurgy are a subject of worldwide current interest. Georgia, with its famous early metalworking traditions, is one of the central parts of the Caucasus region. The aim of the present study is to introduce the results of archaeometallurgical investigations being undertaken in the mountain region of Colchis, Lechkhumi (the Tsageri Municipality of western Georgia), and to establish their place in the existing archaeological context. Lechkhumi (one of the historic provinces of Georgia, known from Georgian, Greek, Byzantine and Armenian written sources as Lechkhumi/Skvimnia/Takveri) is part of the Colchian mountain area. It is one of the important but little-known centres of prehistoric metallurgy in the Caucasian region and of the Colchian Bronze Age culture. Reconnaissance archaeological expeditions (2011-2015) revealed significant prehistoric metallurgical sites in Lechkhumi. Sites located in the vicinity of Dogurashi Village (Tsageri Municipality) became the target area for archaeological excavations. During excavations conducted in 2016-2018, two archaeometallurgical sites, Dogurashi I and Dogurashi II, were investigated. As a result of an interdisciplinary (archaeological, geological and geophysical) survey, it has been established that copper was being smelted at both prehistoric Dogurashi mountain sites and that the ore sources are likely to be of local origin. Radiocarbon dating results confirm they were operating between about the 13th and 9th centuries BC. More recently, another similar site has been identified in this area (Dogurashi III), and this is about to undergo detailed investigation. Other prehistoric metallurgical sites are being located and investigated in the Lechkhumi region, as are chance archaeological finds (often in hoards): copper ingots, metallurgical production debris, slag, fragments of crucibles, tuyeres (air delivery pipes), furnace wall fragments and other related waste debris. Other chance finds being investigated are the many copper, bronze and (some) iron artefacts that have been found over many years, including tools, jewelry and decorative items. These show the important but little-understood role of Lechkhumi in the Late Bronze Age culture of Colchis. It would seem that mining and metallurgical manufacture formed part of the local yearly agricultural lifecycle. Colchian ceramics have been found, together with evidence of artefact production: small stone mould fragments and encrusted material from the casting of a fylfot (swastika) form of Colchian bronze buckle, found in the vicinity of the early settlements of Tskheta and Dekhviri. Excavation and investigation of previously unknown archaeometallurgical sites in Lechkhumi will contribute significantly to the knowledge and understanding of prehistoric Colchian metallurgy in western Georgia (Adjara, Guria, Samegrelo, and Svaneti) and will reveal the importance of this region for the study of ancient metallurgy in Georgia and the Caucasus. Acknowledgment: This work has been supported by the Shota Rustaveli National Science Foundation (grant FR # 217128).

Keywords: archaeometallurgy, Colchis, copper, Lechkhumi

Procedia PDF Downloads 117
256 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather stations’ observations alone is therefore not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST, with data from Landsat, ASTER, or MODIS used extensively. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach for NSAT estimation, by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps. First, a GWR model was set up to estimate NSAT at low resolution, combining air temperature from discrete observations retrieved by weather stations (dependent variable) with LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 km spatial resolution, were employed. Two time periods are considered according to the satellite revisit period, i.e., 10:30 am and 9:30 pm. Afterward, the results were downscaled to 30 m spatial resolution by setting up a second GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the albedo derived from Landsat multispectral data and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 m), have been validated through cross-validation using indicators such as R2, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of well-performing models. In addition, an alternative network of weather stations, available for the City of Milano only, was employed to test the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
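
A GWR of this kind can be prototyped with open tools. The sketch below uses the PySAL mgwr package to regress station air temperature on collocated LST with locally varying coefficients; the coordinates, temperatures and array shapes are synthetic stand-ins for the MODIS/station data described above, so treat it as a minimal illustration rather than the authors' pipeline.

```python
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
n = 60  # hypothetical number of weather stations

# Station coordinates (metres) and synthetic LST / air temperature.
coords = list(zip(rng.uniform(0, 50_000, n), rng.uniform(0, 50_000, n)))
lst = rng.normal(28.0, 3.0, (n, 1))               # satellite LST, deg C
air_t = 0.7 * lst + rng.normal(2.0, 0.8, (n, 1))  # synthetic NSAT, deg C

# Bandwidth selection, then the locally weighted regression itself:
# each station gets its own coefficient, capturing the spatial
# non-stationarity of the LST / air temperature relationship.
bw = Sel_BW(coords, air_t, lst).search()
results = GWR(coords, air_t, lst, bw).fit()

rmse = float(np.sqrt(np.mean((results.predy - air_t) ** 2)))
print(f"bandwidth = {bw}, in-sample RMSE = {rmse:.2f} deg C")
```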

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 171
255 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma

Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu

Abstract:

Soft Tissue Sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on cumulative clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially Fractionated RadioTherapy (SFRT), a non-mainstream technique with a long history of treating bulky tumors, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to correlate positively with overall survival. In an effort to consistently raise the pathological necrosis rate above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were delivered in a preliminary clinical setting. With favorable results (a necrosis rate over 95% in a small cohort of patients), a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein, the design of the clinical study is presented. In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for more than 80% of patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction, with vertices receiving a dose of ≥10 Gy while the tumor periphery is kept at or close to 2 Gy per fraction. Surgical removal of the tumor is planned to occur 6 to 8 weeks after the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. Patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy proves effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Given the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
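
The Pocock-style boundary mentioned above can be illustrated with a small simulation: an exact binomial test of the toxicity rate at each interim look, all at one constant nominal level chosen so that the overall false-alarm rate stays near 5%. The acceptable toxicity rate, cohort sizes and number of looks below are hypothetical, not the trial's actual safety parameters.

```python
import numpy as np
from scipy import stats

p0 = 0.15                         # acceptable toxicity rate (assumed)
cohorts = [10, 20, 30, 40, 50]    # cumulative patients at each look

def stops(tox_counts, nominal_alpha):
    """True if any interim look crosses the stopping boundary."""
    for n, x in zip(cohorts, tox_counts):
        # One-sided exact binomial test: too many toxicities so far?
        p = stats.binomtest(x, n, p0, alternative="greater").pvalue
        if p < nominal_alpha:
            return True
    return False

def overall_alpha(nominal_alpha, n_sim=20_000, seed=1):
    """Simulated probability of a false stop when the rate is truly p0."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        events = rng.random(cohorts[-1]) < p0   # truly safe scenario
        cum = np.cumsum(events)
        if stops([cum[n - 1] for n in cohorts], nominal_alpha):
            hits += 1
    return hits / n_sim

# Calibrate the constant per-look level; a value well below 0.05 is
# needed because the same boundary is tested at every look.
for a in (0.05, 0.025, 0.016):
    print(f"nominal {a:.3f} -> overall type I error {overall_alpha(a):.3f}")
```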

Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma

Procedia PDF Downloads 38
254 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized; the reason is insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. This paper also showcases how Artificial Intelligence makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes, and it is a form of machine learning in which the network learns to recognize vehicles through training, which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera-based identification method built on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
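
To make the CNN idea concrete, the sketch below defines a deliberately tiny image classifier of the kind used for camera-based vehicle and pedestrian recognition. The architecture, input size and class list are illustrative assumptions, not the commercial system described in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical class list for a traffic camera classifier.
CLASSES = ["car", "truck", "bus", "bicycle", "pedestrian", "background"]

class TinyTrafficCNN(nn.Module):
    """Minimal convolutional classifier for 64x64 RGB camera crops."""

    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyTrafficCNN()
frame_batch = torch.randn(4, 3, 64, 64)  # stand-in for camera crops
logits = model(frame_batch)
print(logits.shape)  # torch.Size([4, 6]): one score per class
```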

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 127
253 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using conventional MRI alone to provide a definite diagnosis could potentially lead to inaccurate results, so histopathological examination of biopsy samples is currently considered the gold standard for obtaining a definite diagnosis. Machine learning is the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Accordingly, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration; computer-aided detection and diagnosis systems for CT, MR or radiography images; and functional MR (fMRI) analysis of brain activity for neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. A further purpose is to present an alternative way of achieving quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed in a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of candidate features were chosen, including age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieve the highest accuracy with the fewest variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the classification accuracy for each type of glioma by the four different algorithms are still in process.
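
A workflow of this kind is straightforward to prototype outside WEKA as well. The sketch below compares three of the four named classifiers with 5-fold cross-validation in scikit-learn (its decision tree is CART, standing in here for C4.5); the feature table is synthetic, merely mimicking the 80-case, four-class layout of the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the extracted feature table: 80 cases
# (4 classes x 20) with 12 intensity/shape/texture features.
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2, 3], 20)                    # tumor class labels
X = rng.normal(size=(80, 12)) + 0.8 * y[:, None]   # mildly separable

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(max_depth=5),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale, then classify
    scores = cross_val_score(pipe, X, y, cv=5)   # 5-fold CV accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```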

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 126
252 Qualitative Research on German Household Practices to Ease the Risk of Poverty

Authors: Marie Boost

Abstract:

Despite activation policies, enforced personal initiative to step out of unemployment, and a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, roughly 16 million persons (20.2% of the German population) were at risk of poverty or social exclusion. This is illustrated by an unemployment rate of 13.3% in the research area, located in East Germany. Despite this high number of persons living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty, apart from solely relying on welfare state benefits or entering a stable, well-paid job. Most of them struggle in precarious living circumstances, switching from one or several short-term, low-paid jobs into self-employment or unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project “RESCuE”, resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers, representatives from welfare state agencies, NGOs and charity organizations, and 25 household interviews were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. This presentation focuses on the explanatory strength of this mixed-methods approach in showing the potential of household practices to overcome financial hardship. The research is based on the triangulation of biographical narrative interviews with subsequent participatory photo interviews, asking the household members to portray their typical everyday life. The methodological combination allows an in-depth analysis of the families’ and households’ everyday living circumstances, including their poverty and employment situation, whether formal or informal. Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Thanks especially to the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as they picture typical practices such as cultivating and gathering fruits and vegetables or going fishing. One of the central findings is the multiple purposes of these practices: they help ease the financial burden through reduced consumption, and they strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to help ease financial hardship, e.g., through the use of commons, barter trade or simple mutual exchange (gift exchange). These practices can substitute for external purchases and reduce expenses, or even generate a small income. Mixing different income sources is found to be the most likely way out of poverty within the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned, and many persons put themselves at risk of overstress. Thus, both the potentials and the risks of resilient household practices are reflected upon in the presentation.

Keywords: consumption practices, labor market, qualitative research, resilience

Procedia PDF Downloads 202
251 Practice Based Approach to the Development of Family Medicine Residents’ Educational Environment

Authors: Lazzat M. Zhamaliyeva, Nurgul A. Abenova, Gauhar S. Dilmagambetova, Ziyash Zh. Tanbetova, Moldir B. Ahmetzhanova, Tatyana P. Ostretcova, Aliya A. Yegemberdiyeva

Abstract:

Introduction: There are many reasons for the weak training of family doctors in Kazakhstan: a unified national educational program that is not focused on competencies, an unclear role for the general practitioner (GP), poor funding of the healthcare and education systems, outdated teaching and assessment methods, and inefficient management. We highlight two issues in particular. Firstly, academic teachers of family medicine (FM) in Kazakhstan do not practice as family doctors; most of them are narrow specialists (pediatricians, therapists, surgeons, etc.) who usually hold one-time consultations, while clinical mentors from practical healthcare (non-academic teachers) lack teaching competences, and the vast majority of them are also narrow specialists. Secondly, clinical sites (polyclinics) are unprepared for general practice and do not follow the principles of family medicine; residents do not like to be in primary health care (PHC) settings due to the chaos there and the lack of the equipment necessary for mastering and consolidating practical skills. Aim: We present the concept of the family physicians’ training office (FPTO), which is being created as a friendly learning environment for young general practitioners and as a means of involving academic teachers of family medicine in the practical work and innovative development of PHC. Methodology: In developing the conceptual framework and identifying practical activities, we drew on the literature, expert input, and interviews. Results: The goal of the FPTO is to create a favorable educational and clinical environment for the development of FM residents’ competencies, in which residents, academic teachers and clinical mentors can understand and accept the principles of family medicine, improve clinical knowledge and skills, and gain experience in improving the quality of their practice on a scientific basis. The three main areas of office activity are providing primary care to patients, improving educational services for FM residents and other medical workers, and promoting research and innovation in PHC. The office arranges for residents to see outpatients at least 50% of the time, and teachers of FM departments spend at least a quarter of their working time conducting general medical appointments alongside residents. Taking into account the educational and scientific workload, the population attached to one GP does not exceed 500 persons. The equipment of the office allows FPTO staff to perform invasive and other procedures without sending patients to other clinics. In the office, training for residents is focused on their needs and aimed at achieving the required level of competence. International methodologies and assessment tools are adapted to local conditions and evaluated for their effectiveness and acceptability. Residents and their faculty actively conduct research in the field of family medicine. Conclusions: We propose to change the learning environment in order to create teams of like-minded people and to unite residents and teachers even more closely around the development of family medicine. The offices will also invest resources in developing and maintaining young doctors' interest in family medicine.

Keywords: educational environment, family medicine residents, family physicians’ training office, primary care research

Procedia PDF Downloads 109
250 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. With this in mind, we describe a method used to group, pair and combine multiple Philosophical Assumption statements, which assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PA)s and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics which were utilized in design and development for 1) information retrieval and categorization, 2) item development, and 3) estimation of interactions among variables and the likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by a Cronbach’s α of .81, in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations practicing within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items) and a modifiable “barrier from learning” item consisting of six “choose any” choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains. The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR at the beginning of outpatient addictions counseling. The (PA)s were written in simple words and were descriptively accurate and concise. We then devised a list of parameters, matched them appropriately to each PA, and devised descriptive parametric (PA)s in a domain-categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols, which made it easy to incorporate parametric (PA)s into algorithms, combinatorics and clusters to develop larger information grids. Using matching combinatorics, we took paired demographic and education domains with a subscript of 1 and matched them to the column with each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and to trace relationships of maximum likelihoods.
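
The reported internal consistency can be reproduced from raw item scores with a few lines of code. The sketch below implements the standard Cronbach's α formula and applies it to simulated Likert responses; the 40 × 7 response matrix is a placeholder mimicking the study's layout, not the actual data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: 40 clinicians x 7 Likert items (1-5 scale).
rng = np.random.default_rng(7)
latent = rng.integers(2, 5, size=(40, 1))   # shared underlying attitude
noise = rng.integers(-1, 2, size=(40, 7))   # item-level variation
responses = np.clip(latent + noise, 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```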

Keywords: psychometric, parametric, domains, grids, therapists

Procedia PDF Downloads 250
249 Service Blueprinting: A New Application for Evaluating Service Provision in the Hospice Sector

Authors: L. Sudbury-Riley, P. Hunter-Jones, L. Menzies, M. Pyrah, H. Knight

Abstract:

Just as manufacturing firms aim for zero defects, service providers strive to avoid service failures where customer expectations are not met. However, because services comprise unique human interactions, service failures are almost inevitable. Consequently, firms focus on service recovery strategies to fix problems and retain their customers for the future. Because a hospice offers care to terminally ill patients, it may not get the opportunity to correct a service failure. This makes identifying what hospice users really need and want, and ascertaining perceptions of the hospice’s service delivery from the user’s perspective, even more important than for other service providers. A well-documented and fundamental barrier to improving end-of-life care is the lack of service quality measurement tools that capture the experiences of users from their own perspective. In palliative care, many quantitative measures are used, and these focus on issues such as how quickly patients are assessed, whether they receive information leaflets, whether a discussion about their emotional needs is documented, and so on. Consequently, quality of service from the user’s perspective is overlooked. The current study was designed to overcome these limitations by adapting service blueprinting - never before used in the hospice sector - in order to undertake a ‘deep dive’ into the impact of hospice services upon different users. Service blueprinting is a customer-focused approach to service innovation and improvement, in which the ‘onstage’ visible service user and provider interactions must be supported by ‘backstage’ employee actions and support processes. The study was conducted in conjunction with East Cheshire Hospice in England. The Hospice provides specialist palliative care for patients with progressive life-limiting illnesses, offering services to patients, carers and families via inpatient and outpatient units. Using service blueprinting to identify every service touchpoint, in-depth qualitative interviews with 38 inpatients, outpatients, visitors and bereaved family members enabled a ‘deep dive’ to uncover perceptions of the whole service experience among these diverse users. Interviews were recorded and transcribed, and thematic analysis of over 104,000 words of data revealed many excellent aspects of the Hospice’s service: staff frequently exceed people’s expectations, striking and gratifying comparisons to hospitals emerged, and the Hospice makes people feel safe. Nevertheless, the technique uncovered many areas for improvement, including the serendipity of referral processes, the need for better communication with external agencies, the daunting arrival and admissions process, a desperate need for more depression counselling, clarity of communication pertaining to the actual end of life, and shortcomings in systems dealing with bereaved families. The study reveals that the adapted service blueprinting tool has major advantages over alternative quantitative evaluation techniques, including uncovering the complex nature of service users’ experiences in healthcare service systems, highlighting more fully the interconnected configurations within the system, and making greater sense of the impact of the service upon different service users. Unlike other tools, this in-depth examination reveals areas for improvement, many of which have already been addressed by the Hospice. The technique has the potential to improve experiences of palliative and end-of-life care among patients and their families.

Keywords: hospices, end-of-life-care, service blueprinting, service delivery

Procedia PDF Downloads 171