Search results for: command line input
324 Anti-Graft Instruments and Their Role in Curbing Corruption: Integrity Pact and Its Impact on Indian Procurement
Authors: Jot Prakash Kaur
Abstract:
The paper aims to show that with the introduction of anti-graft instruments and the willingness of governments to implement them, a significant change can be witnessed in the anti-corruption landscape of any country. Over the past decade, anti-graft instruments have been introduced by several international non-governmental organizations with the vision of curbing corruption. Transparency International's 'Integrity Pact' has been one such initiative. The Integrity Pact has been described as a tool for preventing corruption in public contracting. It has found its relevance in a developing country like India, where public procurement constitutes 25-30 percent of Gross Domestic Product. Corruption in public procurement has been a cause of concern even though India has in place a whole architecture of rules and regulations governing public procurement. The Integrity Pact was first adopted by a leading oil and gas government company in 2006. By May 2015, over ninety organizations had adopted the Integrity Pact, the majority of them central government units. The methodology undertaken to understand the impact of the Integrity Pact on public procurement is the analysis of information received from the instrument's key stakeholders. For the government, information was sought through the Right to Information Act 2005 about the details of adoption of this instrument by various government organizations and departments. For contractors, company websites and annual reports were used to find out the steps taken towards implementation of the Integrity Pact. For civil society, Transparency International India's resource materials, which include publications and reports on the Integrity Pact, were used to understand its impact. Findings of the study include the following: organizations have adopted Integrity Pacts in all kinds of contracts, such that 90% of their procurements fall under an Integrity Pact; Indian state governments have found merit in the Integrity Pact and have adopted it in their procurement contracts; the Integrity Pact has been instrumental in creating a brand image for companies; External Monitors, an essential feature of the Integrity Pact, have emerged as arbitrators for the bidders and as the first line of procurement auditors for the organizations; India has cancelled two defense contracts found to conflict with the provisions of the Integrity Pact; and some of the clauses of the Integrity Pact have been included in the proposed public procurement legislation. The Integrity Pact has slowly but steadily grown to become an integral part of big-ticket procurement in India. The government's commitment to implementing the Integrity Pact has changed the way public procurement is conducted in India. Public procurement was a segment infested with corruption, but with the adoption of the Integrity Pact, a number of clean-up acts have been performed to make procurement transparent. The paper is divided into five sections. The first section elaborates on the Integrity Pact. The second section discusses the stakeholders of the instrument and the roles they play in its implementation. The third section discusses the efforts taken by the government to implement the Integrity Pact in India. The fourth section discusses the role of the External Monitor as arbitrator. The final section puts forth suggestions to strengthen the existing form of the Integrity Pact and increase its reach.
Keywords: corruption, integrity pact, procurement, vigilance
Procedia PDF Downloads 339
323 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
Keywords: artificial reasoning, human memory, machine learning, neural networks
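As a minimal sketch of the attractor-network idea described above (assuming bipolar patterns and a classical Hopfield-style energy function; the specific models cited may differ in detail), memories appear as stable equilibria of the dynamics rather than as retrievable stored elements:

```python
import numpy as np

# Minimal Hopfield-style attractor network: memories are not stored as
# retrievable array elements but as stable equilibria of the dynamics.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])

# Hebbian weights: each pattern becomes a minimum of the energy
# E(s) = -0.5 * s W s^T, i.e., a stable fixed point of the update rule.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):  # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[:2] *= -1          # corrupt two bits
print(recall(noisy))     # relaxes back onto the stored pattern
```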
Procedia PDF Downloads 271
322 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
The carbon sequestration question of modern times requires the development of an in-situ method of measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to processing for laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative method is to apply nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. A specific gamma line with an energy of 4.438 MeV appearing under neutron irradiation can be attributed to soil carbon nuclei. Based on the measured gamma line intensity, assessments of soil carbon concentration can be made. This can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides soil carbon concentration and distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. Questions concerning the effect of neutron irradiation on soil health are addressed. Information regarding the absorbed neutron and gamma dose received by soil and its distribution with depth is discussed in this study. This information was generated based on Monte-Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used for the analysis of possible induced irradiation effects. The physical, chemical, and biological effects of neutron soil irradiation were considered. From a physical standpoint, we considered the induction of new isotopes by the neutrons produced by the PFTNA system and estimated the possibility of an increased post-irradiation gamma background by comparison to the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to original values after several minutes due to the decay of short-lived new isotopes. From a chemical standpoint, possible radiolysis of water present in soil was considered. Based on simulations of water radiolysis, we concluded that the dose rates involved cannot produce radiolysis at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were noted at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.
Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation
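A toy one-dimensional Monte-Carlo sketch in the spirit of the MCNP6.2 calculations mentioned above (single energy, exponential free paths, and a made-up mean free path; the actual simulations track full coupled neutron/gamma transport):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D Monte Carlo: neutrons enter the soil surface and interact at
# exponentially distributed depths. The mean free path is an illustrative
# placeholder, not a physical soil cross-section.
mean_free_path_cm = 8.0
n_neutrons = 100_000

depths = rng.exponential(mean_free_path_cm, size=n_neutrons)

# Histogram of interaction depths approximates the depth-dose profile.
counts, edges = np.histogram(depths, bins=np.arange(0, 41, 5))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.0f}-{hi:4.0f} cm: {c / n_neutrons:6.1%} of interactions")
```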
Procedia PDF Downloads 143
321 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer-based vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be realized through counting algorithms and the positioning and recognition of objects, measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them to move pieces from one location to another during production. These devices must be programmed in advance for good performance and must follow a programmed logic routine. Nowadays, the main targets of every industry are production, quality, and the fast elaboration of the different stages and processes in the chain of production of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure with a different color, and to link it with a group of conveyor systems that organize the mentioned figures into cubicles, which also differ from one another by color. This project is based on artificial vision; therefore, the methodology needed to develop it must be strict. It is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with the OpenCV libraries. Together, these tools are used to build the program that identifies colors and forms directly from the camera. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or by a specialized camera. 1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect forms, it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; then the image is binarized, which means rendering the figure of the image in white on a black background. Finally, the contours of the figure are found in order to count the edges and identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which classify the figures into their respective cubicles through the actuators. Conclusions: the OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for any process. With the program developed for this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
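A minimal OpenCV sketch of steps 1.3 and 1.4 above (hypothetical file name and thresholds; the project's actual Qt Creator/OpenCV code is not reproduced here):

```python
import cv2
import numpy as np

frame = cv2.imread("figure.png")        # or a frame from cv2.VideoCapture(0)

# Step 1.3: crude RGB colour check via the mean pixel (OpenCV stores BGR).
b, g, r = cv2.mean(frame)[:3]
color = ["blue", "green", "red"][int(np.argmax([b, g, r]))]

# Step 1.4: grayscale -> binarize -> contours -> count polygon edges.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.04 * cv2.arcLength(cnt, True), True)
    shape = {3: "triangle", 4: "square"}.get(len(approx), "circle")
    print(color, shape)   # step 1.5 would route this result to an actuator
```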
Procedia PDF Downloads 379
320 Citation Analysis of New Zealand Court Decisions
Authors: Tobias Milz, L. Macpherson, Varvara Vetrova
Abstract:
The law is a fundamental pillar of human societies, as it shapes, controls, and governs how humans conduct business, behave, and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science, and AI are creating opportunities to support the practice, research, and study of this pervasive domain. It is therefore not surprising that there has been an increase in investment in supporting technologies for the legal industry (also known as “legal tech” or “law tech”) over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, citation indices provided an effective means to discover new precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the “hidden” information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze, and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers, and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text. The data was then imported into a graph-based database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph-based database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria, and Germany. The authors of this paper provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database to be used for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics to include only specific courts, filtering the results to the area of law of interest.
Keywords: case citation network, citation analysis, network analysis, Neo4j
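A sketch of the network analysis described above, using NetworkX in place of Neo4j's built-in algorithms (the case identifiers are hypothetical; the production database would run equivalent queries in Cypher):

```python
import networkx as nx

# Directed citation graph: an edge (a, b) means decision a cites decision b.
G = nx.DiGraph()
G.add_edges_from([
    ("NZSC 2021/12", "NZCA 2015/88"),
    ("NZHC 2020/431", "NZCA 2015/88"),
    ("NZCA 2015/88", "NZSC 2009/7"),
])

# PageRank surfaces the most influential decisions (heavily cited nodes
# accumulate rank); in-degree gives raw citation counts, whose distribution
# the paper tests for power-law shape.
ranks = nx.pagerank(G, alpha=0.85)
indeg = dict(G.in_degree())
for case in sorted(ranks, key=ranks.get, reverse=True):
    print(f"{case:14s} pagerank={ranks[case]:.3f} cited {indeg[case]} times")
```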
Procedia PDF Downloads 108
319 Television Sports Exposure and Rape Myth Acceptance: The Mediating Role of Sexual Objectification of Women
Authors: Sofia Mariani, Irene Leo
Abstract:
The objective of the present study is to define the mediating role of attitudes that objectify and devalue women (hostile sexism, benevolent sexism, and sexual objectification of women) in the indirect correlation between exposure to televised sports and the acceptance of rape myths. A second goal is to contribute to research on the topic by defining the role of these mediators in exposure to different types of sports, following the traditional gender classification of sports. Data collection was carried out by means of an online questionnaire measuring television sport exposure, sport type, hostile sexism, benevolent sexism, and sexual objectification of women. Data analysis was carried out using IBM SPSS software. The model used was created using Ordinary Least Squares (OLS) regression path analysis. The predictor variable in the model was television sports exposure, the outcome was rape myth acceptance, and the mediators were (1) hostile sexism, (2) benevolent sexism, and (3) sexual objectification of women. Correlation analyses were carried out dividing by sport type and controlling for the participants’ gender. As seen in the existing literature, television sports exposure was found to be indirectly and positively related to rape myth acceptance through the mediating roles of (1) hostile sexism, (2) benevolent sexism, and (3) sexual objectification of women. The type of sport watched influenced the role of the mediators: hostile sexism was found to be the common mediator across all sport types, while exposure to sports traditionally considered feminine or neutral showed the additional mediation effect of sexual objectification of women. In line with the existing literature, controlling for gender showed that the only significant mediators were hostile sexism for male participants and benevolent sexism for female participants. Given the prevalence of men among the viewers of sports traditionally considered masculine, the correlation between television sports exposure and rape myth acceptance through the mediation of hostile sexism is likely due to the gender of the participants. However, this does not apply to the viewers of sports traditionally considered feminine or neutral, as this group is balanced in terms of gender and shows a unique mediation: the correlation between television sports exposure and rape myth acceptance is mediated by both hostile sexism and sexual objectification. Given that hostile sexism is defined as hostility towards women who oppose or fail to conform to traditional gender roles, these findings confirm that sport is perceived as a non-traditional activity for women. Additionally, these results imply that the portrayal of women in sports traditionally considered feminine or neutral, which are classified as such because of their aesthetic characteristics, may have a strong component of sexual objectification of women. The present research contributes to defining the association between sports exposure and rape myth acceptance through the mediation effects of sexist attitudes and the sexual objectification of women. The results of this study have practical implications, such as supporting women's sports teams that ask for more practical and less revealing uniforms, more similar to those of their male colleagues and therefore less objectifying.
Keywords: television exposure, sport, rape myths, objectification, sexism
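A minimal sketch of the product-of-coefficients logic behind an OLS path model like the one above (simulated data and a single mediator for brevity; the study estimated three parallel mediators in SPSS):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
exposure = rng.normal(size=n)                              # TV sports exposure
sexism = 0.5 * exposure + rng.normal(size=n)               # mediator
rma = 0.6 * sexism + 0.1 * exposure + rng.normal(size=n)   # rape myth acceptance

# Path a: predictor -> mediator.
a = sm.OLS(sexism, sm.add_constant(exposure)).fit().params[1]

# Paths b (mediator -> outcome) and c' (direct effect), estimated jointly.
model = sm.OLS(rma, sm.add_constant(np.column_stack([sexism, exposure]))).fit()
b, c_prime = model.params[1], model.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```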
Procedia PDF Downloads 100
318 Towards Bridging the Gap between the ESP Classroom and the Workplace: Content and Language Needs Analysis in English for an Administrative Studies Course
Authors: Vesna Vulić
Abstract:
Croatia has made large steps forward in the development of higher education over the past 10 years. The purposes and objectives of the tertiary education system are focused on the personal development of young people so that they obtain competences for employment in a flexible labour market. The most frequent tensions between tertiary institutions and employers are complaints that the current tertiary education system still supplies students with an abundance of theoretical knowledge and not enough practical skills. Polytechnics and schools of professional higher education should deliver professional education and training that satisfies the needs of their local communities. The 21st century demands that undergraduates as well as their lecturers strive for the highest standards. The skills students acquire during their studies should serve the needs of their future professional careers. In this context, teaching English for Specific Purposes (ESP) presents an enormous challenge for teachers. They have to cope with teaching the language in classes with a large number of students, limitations of time, and inadequate equipment and teaching material; most frequently, this leads to a focus on specialist vocabulary that neglects the development of the skills and competences required for future employment. Globalization has transformed the labour market and set new standards that a prospective employee should meet. Where knowledge of languages is concerned, new generic skills and competences are required: not only skillful written and oral communication, but also information, media, and technology literacy, and learning skills which include critical and creative thinking, collaborating and communicating, as well as social skills. The aim of this paper is to evaluate the needs of two groups of first-year Undergraduate Professional Administrative Study students taking ESP as a mandatory course (47 full-time students and 21 employed part-time students), together with 30 graduates holding a degree in Undergraduate Professional Administrative Study with various amounts of work experience. The survey adopted a quantitative approach with the aim of determining the differences between the groups in their perception of the four language skills and different areas of law, as well as gaining insight into students' satisfaction with the current course and their motivation for studying ESP. Their perceptions are compared to the results of a questionnaire conducted among sector professionals in order to examine how the professionals perceive the same elements of the ESP course content and to what extent it fits their working environment. The results of the survey indicated that there is a strong correlation between acquiring work experience and the level of importance given to particular areas of law studied in an ESP course, which is in line with our initial hypothesis. In conclusion, the results of the survey should help lecturers in re-evaluating and updating their ESP course syllabi.
Keywords: English for Specific Purposes (ESP), language skills, motivation, needs analysis
Procedia PDF Downloads 300
317 Impact of Chess Intervention on Cognitive Functioning of Children
Authors: Ebenezer Joseph
Abstract:
Chess is a useful tool to enhance general and specific cognitive functioning in children. The present study aims to assess the impact of chess on cognitive functioning in children and to measure the differential impact of socio-demographic factors, such as the age and gender of the child, on the effectiveness of the chess intervention. This research study used an experimental design to study the impact of training in chess on the intelligence of children. The pre-test post-test control group design was utilized. The research design involved two groups of children: an experimental group and a control group. The experimental group consisted of children who participated in the one-year chess training intervention, while the control group participated in extra-curricular activities in school. The main independent variable was training in chess. Other independent variables were the gender and age of the child. The dependent variable was the cognitive functioning of the child (as measured by IQ, working memory index, processing speed index, perceptual reasoning index, verbal comprehension index, numerical reasoning, verbal reasoning, non-verbal reasoning, social intelligence, language, conceptual thinking, memory, visual motor skills, and creativity). The sample consisted of 200 children studying in government and private schools. Random sampling was utilized. The sample included both boys and girls in the age range of 6 to 16 years. The experimental group consisted of 100 children (50 from government schools and 50 from private schools) with an equal representation of boys and girls. The control group similarly consisted of 100 children. The dependent variables were assessed using the Binet-Kamat Test of Intelligence, the Wechsler Intelligence Scale for Children - IV (India), and the Wallach-Kogan Creativity Test. The training methodology comprised the Winning Moves Chess Learning Program - Episodes 1-22, lectures with the demonstration board, on-the-board playing and training, chess exercises through workbooks (Chess School 1A, Chess School 2, and tactics), and working with chess software. Further, students' games were mapped using chess software, and the children's patterns of play were studied. They were taught the ideas behind chess openings and were also exposed to classical games. The children participated in mock as well as regular tournaments. Preliminary analysis carried out using independent t-tests with 50 children indicates that chess training has led to significant increases in the intelligence quotient. Children in the experimental group have shown significant increases in composite scores such as working memory and perceptual reasoning. Chess training has significantly enhanced total creativity scores, as well as line drawing and pattern meaning subscale scores. Systematically learning chess as part of school activities appears to have a broad spectrum of positive outcomes.
Keywords: chess, intelligence, creativity, children
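A sketch of the preliminary analysis step (simulated gain scores for illustration only; the study compared experimental and control groups on each composite measure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Pre/post IQ gain scores for 50 chess-trained vs 50 control children
# (simulated values, not the study's data).
gain_chess = rng.normal(loc=6.0, scale=8.0, size=50)
gain_control = rng.normal(loc=1.0, scale=8.0, size=50)

t, p = stats.ttest_ind(gain_chess, gain_control)
print(f"t = {t:.2f}, p = {p:.4f}")   # small p suggests a training effect
```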
Procedia PDF Downloads 257
316 Good Governance Complementary to Corruption Abatement: A Cross-Country Analysis
Authors: Kamal Ray, Tapati Bhattacharya
Abstract:
Private use of public office for private gain could be a tentative definition of corruption, and the most distasteful aspect of corruption is not that it exists, nor that it is pervasive, but that it is socially acknowledged in the global economy, especially in developing nations. We attempted to assess the interrelationship between the Corruption Perception Index (CPI) and the principal components of the World Bank governance indicators: control of corruption (CC), rule of law (RL), regulatory quality (RQ), and government effectiveness (GE). Our empirical investigation concentrates upon the degree to which the governance indicators are reflected in the CPI, in order to single out the most powerful corruption-generating indicator in the selected countries. We collected time series data on the above governance indicators (CC, RL, RQ, and GE) for eleven selected countries from 1996 to 2012 from the World Bank data set. The countries are the USA, UK, France, Germany, Greece, China, India, Japan, Thailand, Brazil, and South Africa. The Corruption Perception Index (CPI) of the countries mentioned above for the period 1996 to 2012 was also collected. The graphical method of a simple line diagram against the time series data on CPI is applied for a quick view of the relative positions of the trend lines of the different nations. The correlation coefficient is sufficient for a primary assessment of the degree and direction of association between the variables, as we have numerical data on the governance indicators of the selected countries. The Granger causality test (1969) is employed to investigate causal relationships between the variables, cause and effect so to speak. We do not need to run stationarity tests, as the length of the time series is short. Linear regression is used to quantify the change in the explained variable due to a change in the explanatory variable with respect to governance vis-a-vis corruption. A bilateral positive causal link between CPI and CC is noticed in the UK: the index value of CC increases by 1.59 units as CPI increases by one unit, and CPI rises by 0.39 units as CC rises by one unit; hence there is a multiplier effect so far as the reduction in corruption is concerned in the UK. GE contributes strongly to the reduction of corruption in the UK. In France, RQ is observed to be the most powerful indicator in reducing corruption, whereas it is the second most powerful indicator after GE in reducing corruption in Japan. A governance indicator like GE plays an important role in pushing down corruption in Japan. In China and India, GE is a proactive and influential indicator for curbing corruption. The inverse relationship between RL and CPI in Thailand indicates that the ongoing machinery related to RL is not complementary to the reduction of corruption. The state machinery of CC in South Africa is highly relevant to reducing the volume of corruption. In Greece, variations in CPI positively influence variations in CC, and an indicator like GE is effective in controlling corruption as reflected by the CPI. All the governance indicators selected here have failed to arrest state-level corruption in the USA, Germany, and Brazil.
Keywords: corruption perception index, governance indicators, granger causality test, regression
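A sketch of the Granger step with statsmodels (toy series; the paper applies the 1969 test pairwise to the CPI and each governance indicator):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 17  # annual observations, 1996-2012

cc = rng.normal(size=n).cumsum()                              # control of corruption
cpi = 0.6 * np.concatenate(([0.0], cc[:-1])) + rng.normal(size=n)  # CPI lags CC

# Null hypothesis: the second column does NOT Granger-cause the first.
data = pd.DataFrame({"cpi": cpi, "cc": cc})
results = grangercausalitytests(data[["cpi", "cc"]], maxlag=1)
```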
Procedia PDF Downloads 304
315 Drivers of Satisfaction and Dissatisfaction in Camping Tourism: A Case Study from Croatia
Authors: Darko Prebežac, Josip Mikulić, Maja Šerić, Damir Krešić
Abstract:
Camping tourism is recognized as a growing segment of the broader tourism industry, currently evolving from an inexpensive, temporary sojourn in a rural environment into a highly fragmented niche tourism sector. The trends among publicly managed campgrounds seem to be moving away from rustic campgrounds that provide only a tent pad and a fire ring toward more developed facilities that offer a range of amenities, where campers still search for unique experiences that go beyond the opportunity to experience nature and social interaction. In addition, while camping styles and options have changed significantly over recent years, coastal camping in particular has become valorized, as it is regarded with a heightened sense of nostalgia. Alongside this growing interest in camping tourism, a demand for quality servicing infrastructure has emerged in order to satisfy the wide variety of needs, wants, and expectations of an increasingly demanding traveling public. However, camping activity in general, and the quality of the camping experience and campers' satisfaction in particular, remain an under-researched area of the tourism and consumption behavior literature. In this line, very few studies have addressed the issue of quality product/service provision in satisfying nature-based tourists and in driving their future behavior with respect to potential re-visitation and recommendation intentions. The present study thus aims to investigate the drivers of positive and negative campsite experiences using the case of Croatia. Due to its well-preserved nature and indented coastline, Croatia has a long tradition of camping tourism, which represents one of its most important and most developed tourism products. During the last decade, the number of tourist overnights in Croatian camps has increased by 26%, amounting to 16.5 million in 2014. Moreover, according to Eurostat, the market share of campsites in the EU is around 14%, indicating that the market share of Croatian campsites is almost twice as large as the EU average. Currently, there are a total of 250 camps in Croatia with approximately 75.8 thousand accommodation units. It is further noteworthy that Croatian camps have higher average occupancy rates and a higher average length of stay compared to the national average across all types of accommodation. In order to explore the main drivers of positive and negative campsite experiences, this study uses principal components analysis (PCA) and an impact-asymmetry analysis (IAA). Using the PCA, the main dimensions of the campsite experience are first extracted in an exploratory manner. Using the IAA, the extracted factors are investigated for their potential to create customer delight and/or frustration. The results provide valuable insights to both researchers and practitioners regarding the understanding of campsite satisfaction.
Keywords: camping tourism, campsite, impact-asymmetry analysis, satisfaction
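A sketch of the PCA step (random placeholder ratings; the study extracts experience dimensions from campsite survey items before running the IAA):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# Rows = respondents, columns = satisfaction items (e.g., cleanliness,
# pitch quality, staff, access to nature); placeholder 1-7 ratings.
ratings = rng.integers(1, 8, size=(400, 12)).astype(float)

pca = PCA(n_components=4)
scores = pca.fit_transform(ratings)
print(pca.explained_variance_ratio_)   # variance captured per dimension

# Each component's loadings (pca.components_) name an experience dimension;
# the IAA then relates component scores to overall satisfaction.
```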
Procedia PDF Downloads 186
314 Brief Cognitive Behavior Therapy (BCBT) in a Japanese School Setting: Preliminary Outcomes on a Single Arm Study
Authors: Yuki Matsumoto, Yuma Ishimoto
Abstract:
Cognitive Behavior Therapy (CBT) with children has shown effective application to various problems such as anxiety and depression. Although there are barriers to accessing mental health services, including a lack of professional services in communities and parental concerns about stigma, schools have a significant role in addressing children's health problems. Schools are regarded as a suitable arena for prevention and early intervention in mental health problems. In this line, CBT is adaptable to school education and useful for enhancing students' social and emotional skills. However, the Japanese school curriculum is rigorous, which limits the time available for implementing CBT in schools. This paper describes Brief Cognitive Behavior Therapy (BCBT) with children in a Japanese school setting. The program has been developed to facilitate the acceptability of CBT in schools and aims to enhance students' skills in managing anxiety and difficult behaviors. The present research used a single-arm design in which 30 students aged 9-10 years participated. The authors provided teachers with a two-hour CBT training workshop at two primary schools in the Tokyo metropolitan area and recruited participants for the research. A homeroom teacher voluntarily delivered a 6-session BCBT program (15 minutes each) during classroom periods known as Kaerinokai, a meeting before leaving school. Students completed a questionnaire sheet at pre- and post-intervention periods under the supervision of the teacher. The sheet included the Spence Children's Anxiety Scale (SCAS), the Depression Self-Rating Scale for Children (DSRS), and the Strengths and Difficulties Questionnaire (SDQ). The teacher was asked for feedback after completion. Significant positive changes were found in the total score and five of six sub-scales of the SCAS, and in the total difficulties scale of the SDQ. However, no significant changes were seen in the Physical Injury Fear sub-scale of the SCAS, in the DSRS, or in the Prosocial sub-scale of the SDQ. The effect sizes were mostly between small and medium. The teacher commented that the program was easy to use, and positive changes were found in classroom activities and personal relationships. This preliminary research showed the feasibility of BCBT in a school setting. The results suggest that BCBT offers effective treatment for the reduction of anxiety and difficult behaviors. The results also suggest that BCBT may be easier than full CBT for Japanese teachers to deliver in promoting child mental health. The study has limitations, including the absence of a control group, the small sample size, and the short teacher training. Future research should address these limitations.
Keywords: brief cognitive behavior therapy, cognitive behavior therapy, mental health services in schools, teacher training workshop
Procedia PDF Downloads 333
313 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data
Authors: Andrea Ghermandi
Abstract:
Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. These boundaries are used as input for the Application Programming Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users to each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics – such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE) – are concerned. The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information regarding the home locations of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. This information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and the provision of cultural ecosystem services through public use in a multi-functional approach, compatibly with the need to protect public health.
Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds
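A sketch of the SMA calibration and one verification statistic (synthetic data; the SMA slope is sign(r)·s_y/s_x, unlike the OLS slope r·s_y/s_x, and the RMSEP is computed in-sample here for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)

photo_user_days = rng.lognormal(3.0, 1.0, size=62)            # social-media proxy
visitors = 40 * photo_user_days * rng.lognormal(0, 0.3, 62)   # observed visits

x, y = np.log(photo_user_days), np.log(visitors)
r = np.corrcoef(x, y)[0, 1]

# Standardized Major Axis fit: slope = sign(r) * sd(y) / sd(x).
slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
intercept = y.mean() - slope * x.mean()

pred = intercept + slope * x
rmsep = np.sqrt(np.mean((y - pred) ** 2))   # root mean square error of prediction
print(f"slope={slope:.2f}, intercept={intercept:.2f}, RMSEP={rmsep:.2f}")
```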
Procedia PDF Downloads 180
312 Evaluation of Requests and Outcomes of Magnetic Resonance Imaging Assessing for Cauda Equina Syndrome at a UK Trauma Centre
Authors: Chris Cadman, Marcel Strauss
Abstract:
Background: In 2020, University Hospital Wishaw in the United Kingdom became the centre for trauma and orthopaedics within its health board. This resulted in the majority of patients with suspected cauda equina syndrome (CES) being assessed and imaged at this site, putting increased demand on MR imaging and displacing other activity. Following this transition, imaging requests for CES did not always follow national guidelines and were often missing important clinical and safety information. There also appeared to be a very low positive scan rate compared with previously reported studies. In an attempt to improve patient selection and reduce the burden of CES imaging at this site, a clinical audit was performed. Methods: A total of 250 consecutive patients imaged to assess for CES were evaluated. Patients had to have presented acutely to either the emergency or orthopaedic department with a presenting complaint of suspected CES. Patients were excluded if they were not admitted acutely or were assessed by other clinical specialities. In total, 233 patients were included. Requests were assessed for an appropriate clinical history, accurate and complete clinical assessment, and MRI safety information. Clinical assessment was allocated a score of 1-6 based on information relating to the history of pain, level of pain, dermatomes/myotomes affected, peri-anal paraesthesia/anaesthesia, anal tone, and post-void bladder volume, with each element scoring one point. Images were assessed for positive findings of CES, acquired spinal stenosis, or nerve root compression. Results: Overall, 73% of requests had a clear clinical history of CES. The urgency of the request for imaging was given in 23% of cases. The mean clinical assessment score was 3.7 out of a possible 6. Overall, 2% of scans were positive for CES, 29% showed acquired spinal stenosis, and 30% showed nerve root compression. Of the patients with CES, 75% had acute neurological signs compared with 68% of the study population. CES patients had a mean clinical history score of 5.3 compared with 3.7 for the study population. Overall, 95% of requests had appropriate MRI safety information. Discussion: This study included 233 patients who underwent specialist assessment and referral for MR imaging for suspected CES. Despite the serious nature of this condition, a large proportion of imaging requests did not have a clear clinical query of CES, and the level of urgency was not given, which could potentially lead to a delay in imaging and treatment. Clinical examination was also often incomplete, which can make triaging patients presenting with similar symptoms challenging. The positive rate for CES was only 2%, much below that of other studies, which had positive rates of 6–40%, with a large meta-analysis finding a mean positive rate of 19%. These findings demonstrate an opportunity to improve the quality of imaging requests for suspected CES. This may help to improve patient selection for imaging and result in a positive rate for CES imaging more in line with other centres.
Keywords: cauda equina syndrome, acute back pain, MRI, spine
Procedia PDF Downloads 11
311 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by the discount function, which decreases the utility value according to how far the actual reception of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor as described by classical economics. Empirical evidence instead called for the formulation of alternative, hyperbolic models that better represent the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of an alternative, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive system. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the smaller decrease in the outcome's value as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how much the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it allows the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
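For reference, the two standard discount-function forms contrasted above (a sketch of the textbook definitions; the paper's own decomposition of the hyperbolic curve is developed in the full text):

```latex
% Exponential discounting: constant discount rate k, so the per-period
% decay is time-invariant (classical rationality).
D_{\exp}(t) = e^{-kt}, \qquad -\frac{D'(t)}{D(t)} = k

% Hyperbolic discounting: the implied discount rate k/(1+kt) falls with
% delay, producing the empirically observed present bias.
D_{\mathrm{hyp}}(t) = \frac{1}{1+kt}, \qquad -\frac{D'(t)}{D(t)} = \frac{k}{1+kt}
```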
Procedia PDF Downloads 129
310 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslides are a geomorphic process that plays an essential role in the evolution of hill-slopes and long-term landscape evolution. But the abrupt nature of the process and its associated catastrophic forces can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic, and infrastructure disturbances. The estimated fatality rate is approximately 1 person per 100 sq. km, and the average economic loss is more than 550 crores per year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS IV) data. Thematic maps of the parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in a GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated from the ratio of the landslide density per factor class (Si/Ni) to the landslide density over the whole parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for the construction of the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, and the FR method is used for formulating if-then rules. Two types of membership structures were utilized for the membership functions: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSI for BG and TT was obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that, in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate.
Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
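A sketch of the Info Val computation (synthetic pixel counts; the logarithmic form shown is the common formulation of the class-density ratio described above):

```python
import numpy as np

# Per-class pixel counts for one causative parameter, e.g. slope-angle
# classes (synthetic numbers): Si = landslide pixels, Ni = class pixels.
Si = np.array([12, 85, 64, 30])
Ni = np.array([40_000, 95_000, 52_000, 33_000])
S, N = Si.sum(), Ni.sum()

# Information value per class: densities above the map-wide average
# (Si/Ni > S/N) score positive and raise susceptibility.
iv = np.log((Si / Ni) / (S / N))
print(np.round(iv, 2))

# The LSI at a pixel is the sum of the IVs of its classes across every
# parameter: LSI = IV_slope + IV_aspect + IV_elevation + ... (summed in GIS).
```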
Procedia PDF Downloads 127
309 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to its over-the-counter nature, the corporate bond market has relatively little data publicly available and is thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series' patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory networks for the prediction of corporate bond prices is discussed. Long short-term memory networks (LSTMs) have been widely used in the literature for various sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time series and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, owing to a memory function that traditional neural networks lack. In this study, a simple LSTM, a stacked LSTM, and a masked LSTM-based model are discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, an Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored further within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
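A minimal sketch of the masked-LSTM variant (hypothetical input shapes and random placeholder data; the study feeds windows of prices, technical indicators, and EMD components):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dense

# X: (samples, timesteps, features) windows of past bond prices plus
# technical indicators / EMD components; y: next-day price. Zero rows
# are padding that the Masking layer tells the LSTM to skip.
X = np.random.rand(500, 14, 8)
y = np.random.rand(500, 1)

model = Sequential([
    Masking(mask_value=0.0, input_shape=(14, 8)),
    LSTM(64),   # for the "stacked" variant, add LSTM(..., return_sequences=True)
    Dense(1),   # regression head predicting the next price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```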
Procedia PDF Downloads 136
308 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. The determination of quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve the determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative, providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of the diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
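A sketch of how LIBS line intensities could be mapped to chloride content (fabricated calibration points for illustration; real work uses matrix-matched standards and normalises the Cl line to the binder signal):

```python
import numpy as np

# Calibration standards: reference chloride content (% by binder mass,
# from wet chemistry) vs measured Cl emission-line intensity (a.u.).
cl_ref = np.array([0.0, 0.2, 0.5, 1.0, 2.0])
intensity = np.array([120.0, 310.0, 640.0, 1150.0, 2230.0])

slope, offset = np.polyfit(intensity, cl_ref, 1)   # linear calibration

unknown = 980.0                                    # intensity at one spot
print(f"estimated Cl = {slope * unknown + offset:.2f} % of binder mass")
# Scanning many spots on a ~1 mm grid yields the 2-D chloride maps.
```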
Procedia PDF Downloads 105
307 3D Interactions in Under Water Acoustic Simulations
Authors: Prabu Duplex
Abstract:
Due to stringent emission regulation targets, a large-scale transition to renewable energy sources is a global challenge, and wind power plays a significant role in the solution vector. This scenario has led to the construction of offshore wind farms, and several wind farms are planned in shallow waters where marine habitats exist. This raises concerns over the impacts of underwater noise on marine species, for example from bridge constructions in ocean straits. Environmental organisations warn that such construction could be devastating and dangerous to aquatic life, since ocean straits are important places of transit for marine mammals, and some of the highest concentrations of biodiversity in the world are found in these areas. The investigation of ship noise and piling noise that may occur during bridge construction and operation is therefore vital. Once the source levels are known, the receiver levels can be modelled. With this objective, this work investigates the key requirements for software that can model transmission loss at the high frequencies that may occur during construction or operation phases. Most propagation models are 2D solutions, calculating the propagation loss along a transect, which does not include horizontal refraction, reflection, or diffraction. In many cases, such models provide sufficient accuracy and can produce three-dimensional maps by combining, through interpolation, several two-dimensional (distance and depth) transects. However, in some instances the use of 2D models may not be sufficient to accurately model the sound propagation. One example is a scenario where an island or land mass is situated between the source and receiver. The 2D model will produce a shadow behind the land mass where the modelled transects intersect it, whereas in reality diffraction will occur, bending the sound around the land mass. In such cases, it may be necessary to use a 3D model, which accounts for horizontal diffraction, to accurately represent the sound field. Other scenarios where 2D models may not provide sufficient accuracy include environments characterised by a strongly up-sloping or down-sloping seabed, such as propagation around continental shelves. In line with these objectives, this work addresses the importance of 3D interactions in underwater acoustics by means of a case study. The methodology used in this study can also be applied to other 3D underwater sound propagation studies. This work assumes special significance given the increasing interest in using underwater acoustic modelling for environmental impact assessments. Future work includes inter-model comparison in shallow-water environments considering more of the physical processes known to influence sound propagation, such as scattering from the sea surface. Passive acoustic monitoring of the underwater soundscape with distributed hydrophone arrays is also suggested to investigate the 3D propagation effects discussed in this article.
Keywords: underwater acoustics, naval, maritime, cetaceans
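For orientation, a simplified range-dependent transmission-loss estimate of the kind that 2D and 3D propagation models refine (spherical spreading plus absorption; the source level and absorption coefficient are placeholders):

```python
import numpy as np

def transmission_loss_db(r_m, alpha_db_per_km=0.5):
    """Spherical spreading + absorption: TL = 20*log10(r) + alpha*r."""
    return 20.0 * np.log10(r_m) + alpha_db_per_km * r_m / 1000.0

source_level_db = 180.0   # e.g., piling noise, dB re 1 uPa at 1 m (placeholder)
for r in (100.0, 1_000.0, 10_000.0):
    rl = source_level_db - transmission_loss_db(r)
    print(f"{r:8.0f} m: received level ~ {rl:5.1f} dB re 1 uPa")

# 3D effects (horizontal refraction and diffraction around land masses)
# shift these levels in ways range-depth (2D) transects cannot capture.
```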
Procedia PDF Downloads 19
306 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows
Authors: S. Pradhan, V. Kumaran
Abstract:
Centrifugal gas separation processes effect separation by utilizing the difference in the mole fraction in a high-speed rotating cylinder caused by the difference in molecular mass and, consequently, in the centrifugal force density. These have been widely used in isotope separation because chemical separation methods cannot be used to separate isotopes of the same chemical species. More recently, centrifugal separation has also been explored for the separation of gases such as carbon dioxide and methane. The efficiency of separation is critically dependent on the secondary flow generated due to temperature gradients at the cylinder wall or due to inserts, and it is important to formulate accurate models for this secondary flow. The widely used Onsager model for the secondary flow is restricted to very long cylinders where the length is large compared to the diameter, to the limit of high stratification parameter, where the gas is restricted to a thin layer near the wall of the cylinder, and it assumes that there is no mass difference between the two species while calculating the secondary flow. There are two objectives of the present analysis of the rarefied gas flow in a rotating cylinder. The first is to remove the restriction of high stratification parameter, to generalize the solutions to low rotation speeds where the stratification parameter may be O(1), and to apply them to dissimilar gases, considering the difference in molecular mass of the two species. Secondly, we would like to compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors resulting from the approximations at different aspect ratios, Reynolds numbers and stratification parameters. In this study, we have obtained analytical and numerical solutions for the secondary flows generated at the cylinder curved surface and at the end-caps due to a linear wall temperature gradient and external gas inflow/outflow at the axis of the cylinder. The effect of sources of mass, momentum and energy within the flow domain is also analyzed. The results of the analytical solutions are compared with the results of DSMC simulations for three types of forcing: a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and the analysis have to be matched with care. The commonly used diffuse reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (the gas temperature at the wall is different from the wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10%, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100, the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.
Keywords: rotating flows, generalized Onsager and Carrier-Maslen model, DSMC simulations, rarefied gas flow
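For orientation, the stratification parameter referred to above is conventionally defined in the gas centrifuge literature as A = sqrt(m Omega^2 R^2 / (2 k T)), the ratio of the peripheral speed to a thermal molecular speed, and for an isothermal gas in rigid rotation the density varies as exp(A^2 r^2 / R^2) from axis to wall. The short sketch below evaluates A and the wall-to-axis density ratio; the operating values are illustrative placeholders, not parameters from the study.

    import numpy as np

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    AMU = 1.66053907e-27    # atomic mass unit, kg

    def stratification_parameter(molar_mass_amu, omega_rad_s, radius_m, temp_k):
        """A = sqrt(m * Omega^2 * R^2 / (2 k T)) -- conventional definition."""
        m = molar_mass_amu * AMU
        return np.sqrt(m * (omega_rad_s * radius_m) ** 2 / (2 * K_B * temp_k))

    # Illustrative values only: CO2 in a 0.1 m radius cylinder at 300 K,
    # rotating at 2000 rad/s (peripheral speed 200 m/s).
    A = stratification_parameter(44.0, omega_rad_s=2000.0, radius_m=0.1, temp_k=300.0)
    print(f"stratification parameter A = {A:.3f}")
    # Isothermal rigid rotation: rho(r) ~ exp(A^2 r^2 / R^2), so at the wall:
    print(f"wall-to-axis density ratio = {np.exp(A ** 2):.2f}")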
Procedia PDF Downloads 398
305 Implementation of a Multidisciplinary Weekly Safety Briefing in a Tertiary Paediatric Cardiothoracic Transplant Unit
Authors: Lauren Dhugga, Meena Parameswaran, David Blundell, Abbas Khushnood
Abstract:
Context: A multidisciplinary weekly safety briefing was implemented at the Paediatric Cardiothoracic Unit at the Freeman Hospital in Newcastle-upon-Tyne. It is a tertiary referral centre with a quaternary cardiac paediatric intensive care unit and provides complex care, including heart and lung transplants, mechanical support, and advanced heart failure assessment. Aim: The aim of this briefing is to provide a structured platform of communication in an effort to improve efficiency, safety, and patient care. Problem: The paediatric cardiothoracic unit is made up of a vast multidisciplinary team, including doctors, intensivists, anaesthetists, surgeons, specialist nurses, echocardiogram technicians, physiotherapists, psychologists, dentists, and dietitians. It provides care for children with congenital and acquired cardiac disease and is one of only two units in the UK to offer paediatric heart transplant. The complexity of cases means that many teams can be involved in the care of each patient, with frequent movement of children between ward, high dependency, and intensive care areas. Until now, there has been no structured forum for communicating important information across the department, for example staffing shortages, prescribing errors, and significant events. Strategy: An initial survey questioning the need for better communication found that 90% of respondents could think of an incident that had occurred due to ineffective communication, and 85% felt that the incident could have been avoided had there been a better form of communication. Lastly, 80% of respondents felt that a weekly 60-second safety briefing would be beneficial to improve communication within our multidisciplinary team. Based on those promising results, a weekly 60-second safety briefing was implemented, to be conducted on Monday mornings. The safety briefing covered four key areas (SAFE): staffing, awareness, fix, and events, highlighting any staffing gaps, incident reports to be learned from, issues requiring fixing, and events, including teaching, for the week ahead. The teams were encouraged to email suggestions or issues to be raised for the week, or to approach in person with information to add. The safety briefing was implemented using change theory. Effect: The safety briefing has been trialled over 6 weeks and has received good buy-in from staff across specialties. The aim is to embed this safety briefing into a weekly meeting using the PDSA cycle. A second survey will follow in one month to assess the efficacy of the safety briefing and to continue to improve the delivery of information. The project will be presented at the next clinical governance briefing to attract wider feedback and input from across the trust. Lessons: The briefing displays promise as a tool to improve vigilance and communication in a busy multidisciplinary unit. We have learned about how to implement quality improvement and about the culture of our hospital, including how hierarchy influences change. We demonstrate how to implement change through a grassroots process, using a junior-led briefing to improve efficiency, safety, and communication in the workplace.
Keywords: briefing, communication, safety, team
Procedia PDF Downloads 143
304 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes
Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse
Abstract:
Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine because of the many tools it can provide to physicians, such as diagnosis support and risk assessment. The notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. An LLM (Large Language Model) was given a narrative (physician's notes) and prompted to extract four features (details about the patient). The narrative came from a datasheet containing six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns hold the accurate responses that the LLM attempts to output. Given the narrative, the LLM would extract the age, gender, diagnosis, and injured body part, with each category output on its own line. The output was then cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand-evaluation. The unclear count essentially represents the extractions the LLM missed, and can be understood as the recall score ([total - false negatives] over total). The other four correspond to the precision score ([total - false positives] over total). Substring comparison evaluated the likeness of the validation (X) and extracted (Y) columns by checking whether X's results were a substring of Y's findings and vice versa. LLM comparison directly asked an LLM whether X's and Y's results were similar. LLM re-check prompted the LLM to verify that the extracted results could be found in the narrative. Lastly, a selection of 1,000 random narratives was hand-evaluated to estimate how well the LLM-based feature extraction model performed. On a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. These low figures are due to minute differences between answers: for example, the 'chest' is part of the 'upper trunk', but these models cannot detect that. On the other hand, the LLM re-check and the subset of hand-tested narratives showed precision scores of 96% and 95%. If this subset is used to extrapolate to the whole 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicate that an LLM-based feature extraction model could be a useful way to collect sports medicine data and analyze it with machine learning models. Wide use of this method could increase the availability of data, improving machine learning algorithms and supporting doctors with more capable tools.
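A minimal sketch of the evaluation loop described above follows: the four extracted lines are parsed into fields, the unclear count stands in for recall, and the two-way substring comparison stands in for precision. Field names and the output format are assumptions made for illustration; the actual prompts and datasheet schema may differ.

    FIELDS = ["age", "gender", "diagnosis", "body_part"]

    def parse_llm_output(text):
        """Each extracted feature is expected on its own line, e.g. 'age: 17'."""
        out = {}
        for line in text.strip().splitlines():
            key, _, value = line.partition(":")
            out[key.strip().lower()] = value.strip().lower()
        return out

    def substring_match(validation, extracted):
        """Two-way substring comparison used as a proxy for precision."""
        v, e = validation.lower().strip(), extracted.lower().strip()
        return bool(v and e) and (v in e or e in v)

    def evaluate(rows):
        """rows: list of (validation_dict, raw_llm_output) pairs."""
        total = len(rows) * len(FIELDS)
        unclear, matched = 0, 0
        for validation, raw in rows:
            extracted = parse_llm_output(raw)
            for field in FIELDS:
                got = extracted.get(field, "")
                if not got or got == "unclear":
                    unclear += 1            # missed extraction -> recall penalty
                elif substring_match(validation[field], got):
                    matched += 1
        recall = (total - unclear) / total
        precision = matched / max(total - unclear, 1)
        return recall, precision

    rows = [({"age": "17", "gender": "male", "diagnosis": "concussion",
              "body_part": "head"},
             "age: 17\ngender: male\ndiagnosis: concussion\nbody_part: head")]
    print(evaluate(rows))  # -> (1.0, 1.0)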
Procedia PDF Downloads 2
303 The Role of Structural Poverty in the Know-How and Moral Economy of Doctors in Africa: An Anthropological Perspective
Authors: Isabelle Gobatto
Abstract:
Based on an anthropological approach, this paper explores the medical profession and the construction of medical practices by considering the multiform articulations between structural poverty and the production of care in a low-resource francophone West African country, Burkina Faso. This country is treated as exemplary of the culturally differentiated countries of the African continent that share the same situation of structural poverty. The objective is to expose the effects of structural poverty on the ways of constructing professional knowledge and of thinking about the sense of the medical profession. If doctors are trained to have the same capacities in Southern and Western countries, namely to treat and save lives whatever the cultural context in which medicine is practised, the ways of investing their role and of dealing with this context of action fracture the homogeneity of the medical profession. In line with the anthropology of biomedicine, this paper outlines the complex effects of structural poverty on health care, care relations, and the moral economy of doctors. The materials analyzed are based on an ethnography spanning two periods thirty years apart (1990-1994 and 2020-2021), drawing on long-term observations of care practices conducted in healthcare institutions and on interviews coupled with the life histories of physicians. The findings reveal that the disabilities doctors face in delivering care are interpreted as policy gaps, but they are also considered by physicians to be constitutive of the social and cultural characteristics of patients, shaping what patients can and cannot do to support caregivers in the production of care. These perceptions have effects on know-how, which is structured around the need to act even when diagnoses have not been made, so as not to see patients desert health structures when the costs of care are too high for them. Yet these highly individualizing interpretations of the difficulties place part of the blame on patients for the difficulty of using learned knowledge and delivering effective care. These situations challenge the ethics of caregivers, but also of ethnologists. Firstly, because these interpretations prevent caregivers from considering the vulnerabilities of care as a common condition shared with their patients in these health systems, affecting both in identical ways although from different positions in the production of care. Correlatively, these results underline that such professional conceptions prevent the emergence of a figure of the victim that could be shared between patients and caregivers who, together, endure working and care conditions at the limit of the acceptable. This dimension directly involves politics. Secondly, structural poverty and its effects on care challenge the ethics of the anthropologist, who observes caregivers producing, without intent to harm, experiences of care marked by an ordinary violence of not giving patients the care they need. It is worth asking how anthropologists could get doctors to think in this light in West African societies.
Keywords: Africa, care, ethics, poverty
Procedia PDF Downloads 69
302 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators measuring the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators which contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is the fact that they are calculated from accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) enterprise value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system, and poor quality of production. In order to gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic. The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the Analysis Major Database for European Sources (AMADEUS). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research, namely an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria, including company-specific characteristics such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by the return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard of customers' requirements.
Keywords: business value, financial ratios, performance measurement, value drivers
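As an illustration of the quantitative side of this methodology, the sketch below computes return on equity and return on assets from basic financial statement items and compares two groups of enterprises with a Mann-Whitney U test, one member of the family of non-parametric tests the abstract refers to. The figures are invented for demonstration and do not come from the AMADEUS sample.

    from scipy.stats import mannwhitneyu

    def roe(net_income, equity):
        return net_income / equity

    def roa(net_income, total_assets):
        return net_income / total_assets

    # Invented ROE figures for two groups of enterprises, e.g. split by
    # managers' answers in the questionnaire survey.
    group_a = [roe(ni, eq) for ni, eq in [(12, 100), (8, 90), (15, 110), (6, 80)]]
    group_b = [roe(ni, eq) for ni, eq in [(3, 100), (5, 95), (2, 85), (4, 120)]]

    stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.3f}")
    print(f"example ROA: {roa(12, 400):.2%}")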
Procedia PDF Downloads 222
301 Globalization of Pesticide Technology and Sustainable Agriculture
Authors: Gagandeep Kaur
Abstract:
The pesticide industry is a major supplier of agricultural inputs. Pesticides are used to control weeds, fungal diseases, and other causes of yield losses in agricultural production. In agribusiness and the agrichemical industry, globalization of markets, competition, and innovation are the dominant trends. Innovation in the agrichemical industry is limited by its tradition of increasing the productivity of agro-systems through generic, universally applicable technologies. The marketing of agricultural technology also needs to deal with various trends, such as locally organized forces that envision a regionalized, sustainable agriculture in the future. Agricultural production has changed dramatically over the past century. Before the Second World War, agricultural production was characterized by low inputs of money, high labour, mixed farming, and low yields. Although mineral fertilizers were already applied in the second half of the 19th century, most of the crops were restricted by local climatic, geological, and ecological conditions. After the Second World War, in the period of reconstruction, political and socioeconomic pressure changed the nature of agricultural production. Food security at low prices for a growing population, and securing farmer income at acceptable levels, became political priorities. The current agricultural policy, the new European Common Agricultural Policy, is aimed at reducing overproduction, liberalizing world trade, and protecting landscapes and natural habitats. Farmers have to increase the quality of their production and control costs because of increased competition from the world market. Pesticides should be more effective at lower application doses, less toxic, and should not pose a threat to groundwater. There is a big debate taking place about how, and whether, to mitigate the intensive use of pesticides. This debate is about the future of agriculture: sustainable agriculture. This is possible by moving away from conventional agriculture, which is characterized by high inputs and high yields and in which pesticides are used across a wide range of crop production. Moving away from conventional agriculture is possible through the gradual adoption of less disturbing and less polluting agricultural practices at the level of the cropping system. A healthy environment for crop production in the future requires the maintenance of chemical, physical, and biological properties, and it is also necessary to minimize the emission of volatile compounds into the atmosphere. Companies are limiting themselves to a particular interpretation of sustainable development, characterized by technological optimism and production maximization. The main objective of the paper is therefore to present the trends in the pesticide industry and in agricultural production in the era of globalization; the second objective is to analyze sustainable agriculture. Pesticide companies seem to have identified biotechnology as a promising alternative and supplement to the conventional business of selling pesticides. The agricultural sector is in the process of transforming its conventional mode of operation: some experts suggest that farmers move towards precision farming, while others suggest engaging in organic farming. The methodology of the paper is historical and analytical, using both primary and secondary sources.
Keywords: globalization, pesticides, sustainable development, organic farming
Procedia PDF Downloads 98
300 Preparation of Metallic Nanoparticles with the Use of Reagents of Natural Origin
Authors: Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec
Abstract:
Nowadays, nano-sized materials are a very popular group of materials among scientists, and they find application in a wide range of areas. A constantly increasing demand for nanomaterials, including metallic nanoparticles such as silver or gold ones, is therefore observed, and new routes for their preparation are sought. Considering the potential applications of nanoparticles, it is important to select an adequate preparation methodology because it determines their size and shape. Chemical and electrochemical techniques lead among the most commonly applied methods of nanoparticle preparation. However, growing attention is currently directed to the biological or biochemical aspects of the synthesis of metallic nanoparticles. This is associated with a trend of developing new routes for the preparation of given compounds according to the principles of green chemistry. These principles involve, e.g., reducing the use of toxic compounds in the synthesis as well as reducing the energy demand and minimizing the generated waste. As a result, a growing popularity of such components as natural plant extracts, infusions, or essential oils is observed. Such natural substances may be used both as a reducing agent for metal ions and as a stabilizing agent for the formed nanoparticles; they can therefore replace the synthetic compounds previously used for the reduction of metal ions or for the stabilization of the obtained nanoparticle suspensions. Methods that proceed in the presence of the previously mentioned natural compounds are environmentally friendly and proceed without the application of any toxic reagents. Methodology: The presented research involves the preparation of silver nanoparticles using selected plant extracts, e.g. artichoke extract. Extracts of natural origin were used as reducing and stabilizing agents at the same time. Furthermore, the syntheses were carried out in the presence of an additional polymeric stabilizing agent. Next, the obtained suspensions of nanoparticles were characterized for total antioxidant activity as well as the content of phenolic compounds. The first of these assays involved the reaction with the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical; the content of phenolic compounds was determined using the Folin-Ciocalteu technique. A further essential issue was determining the stability of the formed nanoparticle suspensions. Conclusions: The research demonstrated that metallic nanoparticles may be obtained using plant extracts or infusions as the stabilizing or reducing agent. The methodology applied, i.e. the type of plant extract used during the synthesis, had an impact on the content of phenolic compounds as well as on the size and polydispersity of the obtained nanoparticles. What is more, it is possible to prepare nano-sized particles characterized by properties desirable from the viewpoint of their potential application, and such an effect may be achieved with the use of non-toxic reagents of natural origin. The proposed methodology thus stays in line with the principles of green chemistry.
Keywords: green chemistry principles, metallic nanoparticles, plant extracts, stabilization of nanoparticles
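For reference, the DPPH assay result is usually reported as percent inhibition of the radical's absorbance, conventionally read at 517 nm; a minimal sketch of that standard calculation follows, with the absorbance readings invented for illustration.

    def dpph_inhibition(a_control, a_sample):
        """Percent inhibition of the DPPH radical: standard assay formula."""
        return (a_control - a_sample) / a_control * 100.0

    # Invented absorbance readings at 517 nm for a control and two suspensions.
    a_control = 0.842
    for name, a in [("artichoke-extract AgNPs", 0.291), ("reference extract", 0.415)]:
        print(f"{name}: {dpph_inhibition(a_control, a):.1f} % inhibition")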
Procedia PDF Downloads 125
299 Revolutionary Wastewater Treatment Technology: An Affordable, Low-Maintenance Solution for Wastewater Recovery and Energy-Saving
Authors: Hady Hamidyan
Abstract:
As the global population continues to grow, the demand for clean water and effective wastewater treatment becomes increasingly critical. By 2030, global water demand is projected to exceed supply by 40%, driven by population growth, increased water usage, and climate change. Currently, about 4.2 billion people lack access to safely managed sanitation services. The wastewater treatment sector faces numerous challenges, including the need for energy-efficient solutions, cost-effectiveness, ease of use, and low maintenance requirements. This abstract presents a groundbreaking wastewater treatment technology that addresses these challenges by offering an energy-saving approach, wastewater recovery capabilities, and a ready-made, affordable, and user-friendly package with minimal maintenance costs. The unique design of this ready-made package made it possible to eliminate the need for pumps, filters, airlift, and other common equipment. Consequently, it enables sustainable wastewater treatment management with exceptionally low energy and cost requirements, minimizing investment and maintenance expenses. The operation of these packages is based on continuous aeration, which involves injecting oxygen gas or air into the aeration chamber through a tubular diffuser with very small openings. This process supplies the necessary oxygen for aerobic bacteria. The recovered water, which amounts to almost 95% of the input, can be treated to meet specific quality standards, allowing safe reuse for irrigation, industrial processes, or even potable purposes. This not only reduces the strain on freshwater resources but also provides economic benefits by offsetting the costs associated with freshwater acquisition and wastewater discharge. The ready-made, affordable, and user-friendly nature of this technology makes it accessible to a wide range of users, including small communities, industries, and decentralized wastewater treatment systems. The system incorporates user-friendly interfaces, simplified operational procedures, and integrated automation, facilitating easy implementation and operation. Additionally, the use of durable materials, efficient equipment, and advanced monitoring systems significantly reduces maintenance requirements, resulting in low overall life-cycle costs and alleviating the burden on operators and maintenance personnel. In conclusion, the presented wastewater treatment technology offers a comprehensive solution to the challenges faced by the industry. Its energy-saving approach, combined with wastewater recovery capabilities, ensures sustainable resource management and enhances environmental stewardship. This affordable, ready-made, and low-maintenance package promotes broad adoption across various sectors and communities, contributing to a more sustainable future for water and wastewater management.
Keywords: wastewater treatment, energy saving, wastewater recovery, affordable package, low maintenance costs, sustainable resource management, environmental stewardship
Procedia PDF Downloads 92
298 Groundwater Arsenic Contamination in Gangetic Jharkhand, India: Risk Implications for Human Health and Sustainable Agriculture
Authors: Sukalyan Chakraborty
Abstract:
Arsenic contamination in groundwater has been a matter of serious concern worldwide. Globally, arsenic-contaminated water has caused serious chronic human diseases, and in the last few decades the transfer of arsenic to human beings via the food chain has gained much attention, because food represents a further potential exposure pathway in instances where crops are irrigated with high-arsenic groundwater, grown in contaminated fields, or cooked with arsenic-laden water. In the present study, the groundwater of Sahibganj district of Jharkhand has been analysed to find the degree of contamination and the probable associated risk from direct consumption or irrigation. The study area, comprising three blocks, namely Sahibganj, Rajmahal and Udhwa, in Sahibganj district of Jharkhand state, India, situated on the western bank of the river Ganga, has been investigated for arsenic contamination in groundwater, soil, and the crops predominantly grown in the region. Associated physicochemical parameters of the groundwater, including pH, temperature, electrical conductivity (EC), total dissolved solids (TDS), dissolved oxygen (DO), oxidation reduction potential (ORP), ammonium, nitrate and chloride, were assessed to understand the mobilisation mechanism and the chances of arsenic exposure from soil to crops and further into the food chain. Results suggested the groundwater to be dominantly of Ca-HCO3 type with low redox potential and a high total dissolved solids load. Major cations followed the order Ca > Na > Mg > K. The major anions followed the order HCO3- > Cl- > SO42- > NO3- > PO43-, the last varying between 0.009 and 0.20 mg L-1. Fe concentrations of the groundwater samples were below the WHO permissible limit, varying between 54 and 344 µg L-1. Phosphate concentration was high and showed a significant positive correlation with arsenic. As concentrations ranged from 7 to 115 µg L-1 in the premonsoon, between 2 and 98 µg L-1 in the monsoon, and from 1 to 133 µg L-1 in the postmonsoon season. The arsenic concentration was found to be much higher than the WHO and BIS permissible limits in the majority of the villages in the study area, and arsenic was positively correlated with iron and phosphate. PCA results demonstrated that both geological conditions and anthropogenic inputs influence the water quality. Arsenic was also seen to increase with depth up to 100 m from the surface. Calculation of the carcinogenic and non-carcinogenic effects of the arsenic concentrations in the communities exposed to the groundwater for drinking and other purposes indicated high risk, averaging more than 1 in 1,000 of the population. The health risk analysis revealed high to very high carcinogenic and non-carcinogenic risk for adults and children in the communities dependent on the groundwater of the study area. The observations suggest that the groundwater is considerably polluted with arsenic and poses a significant health risk for the exposed communities. The mobilisation mechanism of arsenic could also be identified from the results, suggesting reductive dissolution of Fe oxyhydroxides, promoted by high phosphate concentrations from agricultural inputs, releasing arsenic from the sediments along the river Ganges.
Keywords: arsenic, physicochemical parameters, mobilisation, health effects
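Health risk figures of this kind are typically derived with the standard USEPA intake and risk equations; a sketch of that calculation for drinking water exposure is given below. The exposure inputs are generic defaults chosen for illustration rather than the study's values, while the toxicity values (oral reference dose 3e-4 mg/kg/day, oral cancer slope factor 1.5 per mg/kg/day) are the commonly cited IRIS values for inorganic arsenic.

    # Chronic daily intake from drinking water, USEPA-style:
    # CDI = (C * IR * EF * ED) / (BW * AT)
    def cdi(c_mg_per_l, ir_l_day, ef_days_yr, ed_years, bw_kg, at_days):
        return (c_mg_per_l * ir_l_day * ef_days_yr * ed_years) / (bw_kg * at_days)

    RFD_AS = 3e-4   # oral reference dose for arsenic, mg/kg/day
    SF_AS = 1.5     # oral cancer slope factor, (mg/kg/day)^-1

    # Illustrative adult exposure: 115 ug/L (the premonsoon maximum), 2.5 L/day,
    # 365 days/yr for 30 years; AT = 70 yr * 365 d for carcinogenic risk.
    intake = cdi(0.115, 2.5, 365, 30, 60, 70 * 365)
    hq = cdi(0.115, 2.5, 365, 30, 60, 30 * 365) / RFD_AS   # non-carcinogenic
    print(f"hazard quotient = {hq:.1f}")              # > 1 flags potential concern
    print(f"cancer risk     = {intake * SF_AS:.1e}")  # ~3e-3, i.e. 3 in 1,000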
Procedia PDF Downloads 228
297 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method
Authors: Lee Yan Nian
Abstract:
A safety investigation is different from an administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent future accidents, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in an occurrence. On October 21, 2018, following a Taiwan Railway accident which caused 18 fatalities and injured another 267, it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway and highway occurrences. The objective of this study is to assess the effectiveness of the safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for "Technical Events", which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from the number of times they appear in the investigation reports, while the conditional probability table of each child node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using the investigation reports of the same accident, which was investigated by both the TTSB and the local supervisory authority. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. A safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigations and administrative investigations, so that effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention.
Keywords: administrative investigation, bayesian network, causal factor analysis system, safety investigation
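To illustrate the forward-prediction and backward-diagnosis assessments, the toy network below uses a three-node chain (organizational influence -> unsafe act -> derailment) and enumerates the joint distribution to compute inference in both directions. The structure and probabilities are hypothetical, not the CFAS-derived networks or report-count priors of the study.

    from itertools import product

    # Hypothetical chain: Organizational influence (O) -> Unsafe act (U) -> Derailment (D)
    P_O = {True: 0.2, False: 0.8}            # prior, e.g. from report counts
    P_U_GIVEN_O = {True: 0.6, False: 0.1}    # CPT rows, e.g. expert judgement
    P_D_GIVEN_U = {True: 0.4, False: 0.02}

    def joint(o, u, d):
        """P(O=o, U=u, D=d) for the chain factorization."""
        pu = P_U_GIVEN_O[o] if u else 1 - P_U_GIVEN_O[o]
        pd = P_D_GIVEN_U[u] if d else 1 - P_D_GIVEN_U[u]
        return P_O[o] * pu * pd

    # Forward prediction: P(derailment)
    p_d = sum(joint(o, u, True) for o, u in product([True, False], repeat=2))
    # Backward diagnosis: P(organizational influence | derailment observed)
    p_o_given_d = sum(joint(True, u, True) for u in [True, False]) / p_d
    print(f"P(derailment)               = {p_d:.3f}")    # 0.096
    print(f"P(org influence | derail.)  = {p_o_given_d:.3f}")  # 0.517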
Procedia PDF Downloads 123
296 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate
Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori
Abstract:
Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, also focus on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert the H₂S gas coming from the upstream units to elemental sulfur and to minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, comprising a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to the Claus section, SRUs usually include a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction proceeds according to stoichiometry. Accurate control of the air demand leads to optimum recovery of sulfur during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feed-back control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for various reasons, such as the low priority given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of a malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature increase caused the failure of the TGT and raised the concentration of SO₂ from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty of measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead; with the correct selection of this variable, the main variable is controlled along with the secondary one. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. A sensitivity analysis was therefore performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation maximized the sulfur recovery in the Claus section.
Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission
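A minimal sketch of an inferential control scheme of this kind follows: a discrete PI loop trims the combustion air flow using the first Claus reactor outlet temperature in place of the out-of-service tail gas analyzer, on top of a feed-forward term proportional to the acid gas feed. The setpoint, gains, air-to-feed ratio, and even the sign of the temperature-to-air correction are placeholders assumed for illustration, not values or relationships taken from the study.

    class InferentialAirDemandControl:
        """PI trim on combustion air, inferred from the 1st Claus reactor outlet T."""

        def __init__(self, t_setpoint_c=315.0, kp=0.05, ki=0.005, ff_ratio=2.4):
            self.t_sp = t_setpoint_c   # assumed target reactor outlet temperature
            self.kp, self.ki = kp, ki  # placeholder PI gains
            self.ff_ratio = ff_ratio   # placeholder air-to-acid-gas feed-forward ratio
            self.integral = 0.0

        def air_flow(self, acid_gas_flow, t_reactor_c, dt_s=10.0):
            # Sign assumed: a hotter reactor is taken to indicate excess air,
            # so the trim cuts air; the actual sign is plant-specific.
            error = t_reactor_c - self.t_sp
            self.integral += error * dt_s
            trim = -(self.kp * error + self.ki * self.integral)
            return self.ff_ratio * acid_gas_flow + trim

    ctl = InferentialAirDemandControl()
    for t in (315.0, 322.0, 318.0):   # simulated reactor temperatures
        print(f"T={t:.0f} C -> air flow {ctl.air_flow(100.0, t):.1f} (arbitrary units)")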
Procedia PDF Downloads 75
295 A Mixed-Methods Design and Implementation Study of ‘the Attach Project’: An Attachment-Based Educational Intervention for Looked after Children in Northern Ireland
Authors: Hannah M. Russell
Abstract:
‘The Attach Project’ (TAP) is an educational intervention aimed at improving educational and socio-emotional outcomes for children who are looked after. TAP is underpinned by Attachment Theory and is adapted from Dyadic Developmental Psychotherapy (DDP), a treatment for children and young people impacted by complex trauma and disorders of attachment. TAP has been implemented in primary schools in Northern Ireland throughout the 2018/19 academic year. During this time, a design and implementation study has been conducted to assess the promise of effectiveness for the future dissemination and ‘scaling-up’ of the programme for a larger, randomised controlled trial. TAP has been designed specifically for implementation in a school setting and is comprised of a whole-school element and a more individualised Key Adult-Key Child pairing. This design and implementation study utilises a mixed-methods research design consisting of quantitative, qualitative, and observational measures, with stakeholder input and involvement considered an integral component. The use of quantitative measures, such as self-report questionnaires administered prior to and eight months following the implementation of TAP, enabled analysis of the strength and direction of relations between the various components of the programme, as well as the influence of implementation factors. The use of qualitative measures, incorporating semi-structured interviews and focus groups, enabled the assessment of implementation factors, the identification of implementation barriers, and potential methods of addressing these issues. Observational measures facilitated the continual development and improvement of ‘TAP training’ for school staff. Preliminary findings have provided evidence of promise for the effectiveness of TAP and indicate the potential benefits of introducing this type of attachment-based intervention across other educational settings. This type of intervention could benefit not only children who are looked after but all children who may be impacted by complex trauma or disorders of attachment. Furthermore, findings from this study demonstrate that it is possible for children to form a secondary attachment relationship with a significant adult in school. However, various implementation factors which should be addressed were identified throughout the study, such as the necessity of protected time to facilitate the development of a positive Key Adult-Key Child relationship. Furthermore, additional ‘re-cap’ training is required in future dissemination of the programme to maximise ‘attachment friendly practice’ across the whole staff team. Qualitative findings also indicated a general opinion across school staff that this type of Key Adult-Key Child pairing could be more effective if introduced as soon as children begin primary school. This research has provided ample evidence of the need for relationally based interventions in schools, to help ensure that children who are looked after, or who are impacted by complex trauma or disorders of attachment, can thrive in the school environment. In addition, it has facilitated the identification of important implementation factors and barriers to implementation, which can be addressed prior to the ‘scaling-up’ of TAP for a robust, randomised controlled trial.
Keywords: attachment, complex trauma, educational interventions, implementation
Procedia PDF Downloads 194