Search results for: - formal and informal assessments
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1812

222 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning

Authors: Rajkumar Ghosh

Abstract:

Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. 
It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.

Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery

Procedia PDF Downloads 68
221 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador for 1,038,328 students, their families, teachers, and school directors, throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, modularity class shows clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES given by R = –0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile.
The model also identified the factors that explain deprivation, through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors it is possible to influence. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
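The calibration step described above relies on the two-parameter logistic (2PL) item response model. As a minimal sketch of what that model computes, the function below gives the probability of a correct response given a student's ability; the discrimination and difficulty values shown are illustrative, not the study's estimates:

```python
import math

def p_correct_2pl(theta, a, b):
    """Probability that a student with ability theta answers an item
    correctly under the two-parameter logistic (2PL) IRT model,
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.2, difficulty b = 0.5
print(p_correct_2pl(0.5, 1.2, 0.5))   # ability equals difficulty -> probability 0.5
print(p_correct_2pl(2.0, 1.2, 0.5))   # higher-ability student -> higher probability
```

In the study, estimated abilities from this model were re-scaled into the learning index (LI); the re-scaling itself is not specified in the abstract.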

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 106
220 Stimulating Effects of Media in Improving Quality of Distance Education: A Literature Based Study

Authors: Tahzeeb Mahreen

Abstract:

Distance education refers to instruction in which students are remote from the institution and only occasionally attend formal demonstration classes and teaching sessions. Media such as radio, television, computers and the Internet are the resources and means of communication used in learning material by many open and distance learning institutions. Media play a great part in maximizing learning opportunities, thus enabling distance education to raise the literacy rate of the country. This study aims at analyzing how media have affected distance education through their different mediums. The objectives of the study were (i) to determine the direct impact of media on distance education, (ii) to know how media affect distance education pedagogy, and (iii) to find out how media work to increase students' achievement. A literature-based methodology was used, and books, peer-reviewed articles, press reports and internet-based materials were studied. Using descriptive qualitative research analysis, the researcher interpreted that distance education programs increasingly use mixes of media to deliver instruction, with a positive impact on learning along with a few challenges. In addition, the perception of the researcher varied depending on the distance learning programs, but electronic media were generally believed to be comparatively more supportive in enhancing the overall performance of learners. It was concluded that intellectual style, identity qualities, and self-expectations are the three areas of a student's educational life most enhanced by distance education programs. It was portrayed that an understanding of how individual learners approach learning may make it possible for the distance educator to see a pattern of learning styles and arrange or modify course presentations through media.
Moreover, it is noted that teaching in distance education must address the evolving role of the instructor, the need to diminish resistance as conventional teachers adopt distance delivery systems and, lastly, staff attitudes toward the use of technology. Furthermore, the results showed that media have played their part in making distance learning educators more dynamic, capable and concerned about their individual work. The study also indicated a high positive relationship between the media available at study centers and the media used by the distance education programs. The challenge pointed out by the researcher was the clash of distance and time with communication, as the life situations of learners vary widely. Recommendations included realizing the duty of the distance learning instructor to help students understand the effective use of media for their study lessons, and developing online learning communities to be in instant connection with students.

Keywords: distance education, education, media, teaching and learning

Procedia PDF Downloads 125
219 The Acquisition of Spanish L4 by Learners with Croatian L1, English L2 and Italian L3

Authors: Barbara Peric

Abstract:

The study of acquiring a third and additional language has garnered significant focus within second language acquisition (SLA) research. Initially, it was commonly viewed as merely an extension of SLA. However, in the last two decades, numerous researchers have emphasized the need to recognize the unique characteristics of third language acquisition (TLA). This recognition is crucial for understanding the intricate cognitive processes that arise from the interaction of more than two linguistic systems in the learner's mind. This study investigates cross-linguistic influences in the acquisition of Spanish as a fourth language by students who have Croatian as a first language (L1), English as a second language (L2), and Italian as a third language (L3). Observational data suggest that influence or transfer of linguistic elements can arise not only from one's native language (L1) but also from non-native languages. This implies that, for individuals proficient in multiple languages, the native language does not consistently hold a superior position. Instead, it should be examined alongside other potential sources of linguistic transfer. Earlier studies have demonstrated that high proficiency in a second language can significantly impact cross-linguistic influences when acquiring a third and additional language. Among the extensively examined factors, the typological relationship stands out as one of the most scrutinized variables. The goal of the present study was to explore whether language typology and formal similarity or proficiency in the second language had a more significant impact on L4 acquisition. Participants in this study were third-year undergraduate students at Rochester Institute of Technology’s subsidiary in Croatia (RIT Croatia). All the participants had exclusively Croatian as L1, English as L2, Italian as L3 and were learning Spanish as L4 at the time of the study.
All the participants had a high level of proficiency in English and a low level of proficiency in Italian. Based on the error analysis, the findings indicate that for some types of lexical errors, such as coinage, language typology had a more significant impact, and Italian was the preferred source of transfer despite the low proficiency in that language. For other types of lexical errors, such as calques, second language proficiency had a more significant impact, and English was the preferred source of transfer. On the other hand, Croatian, Italian, and Spanish are more similar in the area of morphology due to a higher degree of inflection compared to English, and the strongest influence of Croatian was precisely in the area of morphology. The results emphasize the need to consider linguistic resemblances between the native language (L1) and the third and additional language, as well as the learners' proficiency in the second language, when developing successful teaching strategies for acquiring the third and additional language. These conclusions add to the expanding knowledge in the realm of second language acquisition and offer practical insights for language educators aiming to enhance the effectiveness of learning experiences in acquiring a third and additional language.

Keywords: third and additional language acquisition, cross-linguistic influences, language proficiency, language typology

Procedia PDF Downloads 33
218 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore the drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Further studies from recent years suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries. This is a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach, as encoded in the auditing standards. Since these audit risk factors are not part of the existing models that are used to explain technology adoption, these models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill this gap in the literature, which exists as a result of using generic technology adoption models. Following a pretest and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits, and proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines to assess the audit risk components, other CPA firms do not follow a formal and validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools.
On the other hand, small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to the quality of the audit; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs' capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by looking at it from a risk-based auditing approach. It suggests a new model for CAATTs adoption by incorporating influencing audit risk factors that auditors should examine when considering CAATTs adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the considerable potential of CAATTs for the quality of audits. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators. Moreover, they may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 50
217 Applicable Law to Intellectual and Industrial Property Agreements According to Turkish Private International Law and Rome I Regulation

Authors: Sema Cortoglu Koca

Abstract:

Intellectual and industrial property rules have a substantial effect on sustainable development. Intellectual and industrial property rights, as temporary privileges over the products of intellectual activity, determine the supervision of information and technology. The level and scope of intellectual property protection thus influence the flow of technology between developed and developing countries. In addition, intellectual and industrial property rights are based on the notion of balance. Since they are time-limited rights, they reconcile private and public benefits. That is, intellectual and industrial property rights respond to both private and public interests by rewarding innovators and by promoting the dissemination of ideas, respectively. Intellectual and industrial property rights can, therefore, be a tool for sustainable development. If countries can balance their private and public interests according to their particular context and circumstances, they can ensure the intellectual and industrial property protection which promotes innovation and the technology transfer relevant for them. People, enterprises and countries who need technology can acquire developed technology from those who hold it, so as to decrease their technological dependence and improve their own technology. Because of the significance of intellectual and industrial property rights for technology transfer law, as mentioned above, this paper is confined to intellectual and industrial property agreements, especially technology transfer contracts. These include license contracts, know-how contracts, franchise agreements, joint venture agreements, management agreements, and research and development agreements. In Turkey, technology transfer law is still a developing subject.
For developing countries, technology transfer regulations are very important for their private international law, because these countries do not know which law is applicable to technology transfer when conflicts arise. In most technology transfer contracts having international elements, the parties choose a law to govern their contracts. Where the parties do not choose a law, either expressly or impliedly, in matters not excluded from party autonomy, the court has to determine the law applicable to the contract in matters of capacity and the material, formal and essential validity of contracts. For determining the proper law of technology transfer contracts, we attempt to build a rule applicable to all technology transfer contracts. This paper is confined to the applicable law to intellectual and industrial property agreements according to the ‘5718 Turkish Act on Private International Law and Civil Procedure’ and ‘Regulation (EC) No 593/2008 of the European Parliament and of the Council of 17 June 2008 on the law applicable to contractual obligations (Rome I)’. For such complex contracts, finding a single rule can be really difficult. We can arrange technology transfer contracts in groups, and we can determine the rule and connecting factors for these groups. For the contracts which are not included in these groups, we can determine a special rule considering the characteristics of the contract.

Keywords: intellectual and industrial property agreements, Rome I regulation, technology transfer, Turkish act on private international law and civil procedure

Procedia PDF Downloads 135
216 Preceptor Program: A Way to Reduce Absconding Rate and Increase Patient Satisfaction

Authors: Akanksha Dicholkar, Celin Jacob, Omkar More

Abstract:

Workforce instability, as demonstrated by high rates of staff turnover and lingering vacancy rates, continues to be a major challenge faced by health care organizations. The impact is manifested in workflow inefficiencies, delays in delivering patient care, and dissatisfaction among patients and staff, all of which can have significant negative effects on quality of care and patient safety. In addition, the staggering administrative costs created by a transient workforce threaten health care organizations' financial viability. One nurse retention strategy is to have newly hired nurses partake in preceptorship. Precepting is a way to enculturate new employees into their role; it also fosters a good professional, collegial relationship between an experienced nurse and a newly hired nurse. This study demonstrates the impact of a preceptor program on absconding rate, employee satisfaction and patient satisfaction. Purpose of study: to decrease the absconding rate. Objectives: 1. To reduce the high absconding rate among nurses in Aster Medcity (AMC). 2. To facilitate the acclimatization of newly hired nurses into their role, focusing on professional growth, inter-professional relationships and the clinical skills required for the job. Methodology: descriptive study using a convenience sampling method; data were collected by direct observation, questionnaires and interviews. Sample size was determined from a statistical table at 95% CI. We conducted a pre- and post-intervention analysis to assess the impact of preceptorship at AMC, which has a daily occupancy of approximately 300 patients. Result: the preceptor program had a significant positive impact on all measured parameters. The absconding rate came down from 20% to 0% (p = 0.001). Patient satisfaction scores rose from 85% to 95%. Employee satisfaction rose from 65% to 85%.
Conclusion: The project proved that the Preceptor Development Programme and the steps taken in hand-holding new joiners were effective in reducing the absconding rate among nurses and improved the overall satisfaction of new nurses. Preceptee satisfaction with the preceptorship experience was correlated with a favorable evaluation of the relationship between the preceptee and preceptor. These findings indicate that when preceptors and preceptees have the benefit of formal preceptorship programs that are well supported, and when the preceptors' efforts are rewarded, satisfaction is enhanced for both participants and preceptor commitment to the role is reinforced.
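The reported drop in absconding rate (20% to 0%, p = 0.001) can be checked with a standard two-proportion z-test. Below is a stdlib-only sketch; the cohort sizes (50 nurses per period) are hypothetical, since the abstract does not report sample counts:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error.
    Returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical: 10 of 50 nurses absconded before the program, 0 of 50 after
z, p = two_proportion_z(10, 50, 0, 50)
print(round(z, 2), round(p, 4))
```

With these assumed counts the p-value lands near the reported 0.001; with a proportion of zero in one group, an exact test (e.g. Fisher's) would be the more rigorous choice.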

Keywords: absconding rate, preceptor, employee satisfaction index, satisfaction index

Procedia PDF Downloads 284
215 Women's Entrepreneurship in Mena Region: Gem Key Learnings

Authors: Fatima Boutaleb

Abstract:

Entrepreneurship proves to be crucial for economic growth and development, since it contributes to job creation and the improvement of overall productivity, thus generating a positive impact upon society at various levels. Promoting entrepreneurship therefore stimulates economic diversity that is key to improving and/or maintaining the standard of living. In fact, recent research suggests that entrepreneurship contributes to development by creating businesses and jobs, stimulating innovation, creating social capital across borders, and channeling political and financial capital. However, different research studies indicate that among the main factors impeding entrepreneurship are politico-economic as well as socio-cultural problems, felt most intensely by young people and by women. In the MENA region, gender discrimination is alarming: only one woman in eight runs her own business, against one man in three. In most countries, young women and young men face problems involving access to finance, inadequate infrastructure, lack of support and, in general, an ecosystem that is rather unfavorable. According to the International Labour Organization, North Africa and the Middle East have the highest unemployment rates of all regions of the world. Moreover, nearly a quarter of the population under 30 is unemployed, and youth unemployment costs the region more than $40 billion each year. In the current context, the situation in the Middle East and North Africa region is singular, both in terms of demographic trends and of socio-economic issues around the employment of a large and better-trained youth still strongly affected by unemployment and under-employment. According to a study published in 2015 by McKinsey, the world would gain 26% of additional GDP (47% in the MENA region), more than 28 trillion dollars by 2025, if women were to participate in the economy to the same extent as men.
Promoting entrepreneurship represents an excellent alternative for countries whose productive fabric fails to integrate the contingent of young people entering the job market each year. The MENA region, presenting entrepreneurial activity rates below those of other regions of comparable development, undoubtedly has leeway at this level, even though the region displays large national heterogeneity, namely in the priority given to the promotion of entrepreneurship. The objective of this article is therefore to examine the entrepreneurial vocation of women in the MENA region, to see to what extent research on the determinants of gender can provide information on the trend of emerging entrepreneurial activity, whether driven by necessity or by opportunity, and, on this basis, to submit public policy proposals for improving the mechanisms of inclusion of young women. The objective is not to analyze causality models but rather to identify the entrepreneurial construct specific to the MENA region via the analysis of GEM data from 2017 to 2019 among adults belonging to 10 countries of the MENA region. Notably, the study shows that the inclusion of young women may be enhanced. These disadvantaged segments frequently intend to become entrepreneurs, but they tend not to enact their vocational intentions.

Keywords: economic development, entrepreneurial activity, GEM, gender, informal sector

Procedia PDF Downloads 83
214 Cultural Statistics in Governance: A Comparative Analysis between the UK and Finland

Authors: Sandra Toledo

Abstract:

There is an increasing tendency in governments toward more evidence-based policy-making and stricter auditing of public spheres. Especially when budgets are tight and taxpayers demand bigger scrutiny over the use of available resources, statistics and numbers appear as an effective tool to produce data that supports investments made, as well as to evaluate public policy performance. This pressure has not exempted the cultural and art fields. Finland, like the rest of the Nordic countries, has kept its welfare state principles, whilst the UK seems to be going in the opposite direction, relying more and more on the private sector and foundations as the state folds back. The boom of the creative industries, along with a managerial trend introduced by Thatcher in the UK, brought as a result a commodification of the arts within a market logic, where sponsorship and commercial viability were the keynotes. Finland, for its part, in spite of following a more protectionist approach to the arts, seems to be heading in a similar direction. Additionally, there is a growing international interest in the application of cultural participation studies and the comparability of their results between countries. Nonetheless, the standardization of cultural surveys has not happened yet. Not only are there differences in the application of these types of surveys in terms of time and frequency, but also regarding those conducting them. Therefore, one hypothesis considered in this research is that behind the differences between countries in the application of cultural surveys, and in the production and utilization of cultural statistics, lies the cultural policy model adopted by the government. In other words, the main goal of this research is to answer the following: what are the differences and similarities between Finland and the UK regarding the role cultural surveys have in cultural policy making?
Secondary questions include: how does the cultural policy model followed by each country influence the role of cultural surveys in cultural policy making, and what are the differences at the local level? In order to answer these questions, strategic cultural policy documents and interviews with key informants will be used and analyzed as source data, using content analysis methods. Cultural statistics per se will not be compared, but rather their use as instruments of governing and their relation to the cultural policy model. Aspects such as the execution of cultural surveys, funding, periodicity, and the use of statistics in formal reports and publications will be studied in the written documents, while the interviews will target other elements such as the perceptions of those involved in collecting cultural statistics or policy making, the distribution of tasks and hierarchies among cultural and statistical institutions, and a general overview. A limitation identified beforehand, and one expected throughout the process, is the language barrier in the case of Finland when it comes to official documents; it will be tackled by interviewing the authors of such papers and choosing key extracts of them for translation.

Keywords: Finland, cultural statistics, cultural surveys, United Kingdom

Procedia PDF Downloads 218
213 Biliteracy and Latinidad: Catholic Youth Group as a Site of Cosmopolitan Identity Building

Authors: Natasha Perez

Abstract:

This autobiographical narrative inquiry explores the relationship between religious practice, identity, language and literacy in the author’s life experience as a second-generation Cuban-American growing up in the bilingual spaces of South Florida. The author describes how the social practices around language, including the flexibility to communicate in English and Spanish simultaneously, known as translanguaging, were instrumental to developing a biliterate cosmopolitan identity, along with a greater sense of Latinidad, through interactions with diverse Latinx church members. This narrative study involved cycles of writing, reading, and reflection within a three-dimensional narrative inquiry space in order to discover the ways in which language and literacy develop in the relationship between the personal and the social, across time and space, as historically situated phenomena. The findings show that Catholic faith practices have always been a source and expression of Cuban-ness, a means of sustaining Cuban identity, as well as a medium for bilingual language and literacy practice in the author’s life. Despite lacking formal literacy education in Spanish, she benefitted from the Catholic Church’s response to the surge of Spanish-speaking immigrants in South Florida in the 1980s and the subsequent flexibility of language practice in church-sponsored youth groups. The faith-sharing practices of the youth group created a space to use Spanish in more sophisticated ways that served to build confidence as a bilingual speaker and expand bilingual competence. These experiences also helped the author develop a more salient identity as Cuban-American and a deeper connection to her Cuban-ness in relation to the Nicaraguan, Venezuelan, and first-generation Cuban identities of her peers.
The youth group also fostered cosmopolitan identity building through interactions with pan-ethnic Spanish speakers, with Catholicism as a common language and culture that served as a uniting force. Interaction with these peers also fostered cosmopolitan understandings that deepened the author’s knowledge of the geographical boundaries, political realities, and socio-historical differences between these groups of immigrants. This narrative study opens a window onto the micro-processes and socio-cultural dynamics of language and identity development in the second generation, with the potential to deepen our understanding of the impact of religious practice on these processes.

Keywords: literacy, religion, identity, cosmopolitanism, culture, language, translanguaging

Procedia PDF Downloads 77
212 A Development of an Innovator Teacher Training Curriculum to Create Instructional Innovation According to the Active Learning Approach to Enhance Learning Achievement of Private Schools in Phayao Province

Authors: Palita Sooksamran, Katcharin Mahawong

Abstract:

This research aims to develop an innovator teacher training curriculum for creating instructional innovation according to the active learning approach to enhance learning achievement. The research and development process was carried out in three steps. Step 1, a study of the needs for developing the training curriculum: a questionnaire survey was conducted with 176 teachers in private schools in Phayao Province providing basic education; the sample was defined using a table of random numbers and stratified random sampling, with the school as the stratum. Step 2, training curriculum development: the tools used were the developed training curriculum and curriculum assessments, with nine experts checking the appropriateness of the draft curriculum; the statistics used in the data analysis were the mean (x̄) and standard deviation (S.D.). Step 3, a study of the effectiveness of the training curriculum: a one-group pretest/posttest design was applied, with a sample of 35 teachers from private schools in Phayao Province who volunteered to attend. The results showed that: 1. The essential needs, in descending order of the needs index, were the selection and creation of multimedia, videos, and applications for learning management (highest level), the development of multimedia, videos, and applications for learning management, and the selection of innovative learning management techniques and problem-solving methods for learning, respectively. 2. The components of the training curriculum include principles, aims, scope of content, training activities, learning materials and resources, supervision, and evaluation. 
The scope of the content consists of basic knowledge about learning management innovation, active learning, lesson plan design, learning materials and resources, learning measurement and evaluation, implementation of lesson plans in the classroom, and supervision and monitoring. The quality of the draft training curriculum was evaluated at the highest level; the experts suggested that the course objectives be worded to convey outcomes. 3. On the effectiveness of the training curriculum: 1) the teachers' cognitive outcomes in creating innovative learning management showed a high relative gain score; 2) the teachers' learning management ability according to the active learning approach to enhance learning achievement, assessed by two educational supervisors, was very high overall; 3) regarding the quality of innovative learning management based on the active learning approach, 7 instructional innovations were evaluated as outstanding works and 26 instructional innovations passed the standard; 4) the overall learning achievement of students taught by the 35 sample teachers showed a high relative gain score; and 5) the teachers' satisfaction with the training curriculum was at the highest level.
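The "relative gain score" reported above is commonly computed as the share of the possible improvement that a learner actually achieved between pretest and posttest; a minimal sketch (the formula variant and the example scores below are illustrative, not taken from this study):

```python
def relative_gain(pre: float, post: float, max_score: float) -> float:
    """Relative gain score: percentage of the possible improvement achieved."""
    if max_score <= pre:
        raise ValueError("pre-test score must be below the maximum score")
    return (post - pre) / (max_score - pre) * 100

# Example: pre-test 12/30, post-test 24/30 realizes about two thirds of the
# possible gain.
print(round(relative_gain(12, 24, 30), 1))  # 66.7
```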

Keywords: training curriculum, innovator teachers, active learning approach, learning achievement

Procedia PDF Downloads 37
211 Diagnostic Yield of CTPA and Value of Pre-Test Assessments in Predicting the Probability of Pulmonary Embolism

Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran

Abstract:

Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not there was an overuse of CTPA in our service. Methods: CT scans done on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%) and haemoptysis (5%). A D-dimer test was done in 69%. Overall, the Wells score was low (<2) in 28%, moderate (2-6) in 47% and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. 
PE was confirmed in 12% (8 male) of patients. Four had bilateral PEs. In the high-risk group (Wells >6) (n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells 2-6) (n=47), there were 6, and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15 and a pulmonary nodule in 4 patients. 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
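The Wells risk bands used above (low <2, moderate 2-6, high >6) are derived by summing weighted clinical criteria; a hedged sketch using the standard published weights (the item names and the example patient are illustrative, not from this study):

```python
# Standard Wells criteria for PE and their published weights.
WELLS_ITEMS = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilisation_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings: set) -> float:
    """Sum the weights of the criteria present in this patient."""
    return sum(w for item, w in WELLS_ITEMS.items() if item in findings)

def risk_band(score: float) -> str:
    """Risk bands as used in the abstract: low < 2, moderate 2-6, high > 6."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

# Hypothetical patient with tachycardia and haemoptysis.
patient = {"heart_rate_over_100", "haemoptysis"}
print(wells_score(patient), risk_band(wells_score(patient)))  # 2.5 moderate
```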

Keywords: CTPA, D-dimer, pulmonary embolism, Wells score

Procedia PDF Downloads 213
210 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach

Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo

Abstract:

In Liberia, many agricultural households cultivate cassava either for sustenance or to generate farm income. Many of the concentrated cassava farmers reside in Nimba, a north-eastern county that borders two other economies: the Republics of Côte d'Ivoire and Guinea. With high demand for cassava output and products in emerging Asian markets, coupled with the objective of Liberia's agriculture policies to increase the competitiveness of high-value agricultural crops, there is a need to examine the level of resource-use efficiency for many agricultural crops. However, information on the efficiency of many crops, including cassava, is scarce. Hence the study, applying an output-oriented method, seeks to assess the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate a sample for the study. From 216 cassava farmers, data on on-farm attributes and socio-economic and institutional factors were collected. Stochastic frontier models of production and revenue, using the translog functional form, were used to determine the level of revenue efficiency and its determinants. The results showed that most of the cassava farmers are male (60%). Many of the farmers are either married, engaged or living together with a spouse (83%), with a mean household size of nine persons. Farmland is predominantly obtained by inheritance (95%), the average farm size is 1.34 hectares, and most cassava farmers did not access agricultural credit (76%) or extension services (91%). The mean cassava output per hectare is 1,506.02 kg, which yields an estimated average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%, with a mean revenue efficiency of 12.9%. 
This indicates that, on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiencies. Among the significant determinants of revenue efficiency, age and group membership had negative effects on the revenue efficiency of cassava production, while farming experience, access to extension, formal education, and the average wage rate had positive effects. The study recommends the setting up and incentivizing of farmer field schools for cassava farmers, primarily to share farming experiences and to learn robust cultivation techniques of sustainable agriculture. Also, farm managers and farmers should consider a fixed wage rate in labour contracts for all stages of cassava farming.
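In an output-oriented frontier framework, revenue efficiency is the ratio of observed revenue to the maximum (frontier) revenue attainable with the same inputs; a minimal sketch, where the frontier value is a hypothetical figure chosen only so the ratio reproduces the abstract's mean of 12.9%:

```python
def revenue_efficiency(observed_revenue: float, frontier_revenue: float) -> float:
    """Ratio of observed to maximum feasible (frontier) revenue, in [0, 1]."""
    return observed_revenue / frontier_revenue

# Hypothetical farm: L$23,551 observed against an assumed frontier of L$182,566.
re = revenue_efficiency(23551, 182566)
print(f"revenue efficiency {re:.1%}, unexploited potential {1 - re:.1%}")
```

The complement of the ratio (here about 87.1%) is the unexploited potential the abstract refers to.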

Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture

Procedia PDF Downloads 112
209 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim could be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment, including assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of change, and mapping of irrigated areas. Calculating thresholds of soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review structured by surveying about 100 recent research studies to analyze varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. The other objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring change. 
The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the reasons for, and/or magnitudes of, error in employing different approaches in the three proposed parts, as reported by recent studies. Additionally, as an overview, the conclusion decomposes the different approaches into optimized indices, sensor calibration methods, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
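The RS indices "formed by choosing specific reflectance bands" mentioned above are typically simple normalized band ratios; a sketch of one widely used example, NDVI (the reflectance values below are illustrative, not drawn from any study in the review):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Water-stressed canopies typically show lower NIR reflectance and higher red
# reflectance than healthy ones, which lowers the index.
healthy = ndvi(0.45, 0.05)   # approx. 0.8
stressed = ndvi(0.30, 0.10)  # approx. 0.5
print(healthy, stressed)
```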

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 58
208 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to inform the reader briefly about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI aids chest radiology in the following ways. It detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from nontension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localises vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localises both endotracheal tubes and central vascular lines. It detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation. 
It provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is expected that the continuing progress of computerized systems and improvements in software algorithms will render AI the second hand of the radiologist.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 49
207 Mapping the State of the Art of European Companies Doing Social Business at the Base of the Economic Pyramid as an Advanced Form of Strategic Corporate Social Responsibility

Authors: Claudio Di Benedetto, Irene Bengo

Abstract:

The objective of the paper is to study how large European companies develop social business (SB) at the base of the economic pyramid (BoP). BoP markets are defined as the four billion people living with an annual income below $3,260 in local purchasing power. Although they are heterogeneous in geographic range, they present some common characteristics: the presence of significant unmet (social) needs, a high level of informal economy and the so-called 'poverty penalty'. As a result, most people living at the BoP are excluded from the value created by the global market economy. It is worth noting, however, that the BoP population, with an aggregate purchasing power of around $5 trillion a year, represents a huge opportunity for companies that want to enhance their long-term profitability perspective. We suggest that in this context the development of SB is, for companies, an innovative and promising way to satisfy unmet social needs and to experience new forms of value creation. Indeed, SB can be considered a strategic model for developing CSR programs that fully integrate the social dimension into the business to create economic and social value simultaneously. Although many studies in the literature have been conducted on social business, only a few have explicitly analyzed the phenomenon from a company perspective, and the role of companies in the development of such initiatives remains understudied, with fragmented results. To fill this gap, the paper analyzes the key characteristics of the social business initiatives developed by European companies at the BoP. The study was performed by analyzing 1,475 European companies participating in the United Nations Global Compact, the world's leading corporate social responsibility program. Through the analysis of corporate websites, the study identifies companies that actually do SB at the BoP. 
For the SB initiatives identified, information was collected according to a framework adapted from an established SB model. Preliminary results show that more than one hundred European companies have already implemented social businesses at the BoP, accounting for 6.5% of the total. This percentage increases to 15% when the focus is on companies with more than 10,440 employees. In terms of geographic distribution, 80% of the companies doing SB at the BoP are located in western and southern Europe. The companies most active in promoting SB belong to the financial sector (20%), the energy sector (17%) and the food and beverage sector (12%). In terms of social needs addressed, almost 30% of the companies develop SB to provide access to energy and WASH, 25% develop SB to reduce local unemployment or to promote local entrepreneurship, and 21% develop SB to promote financial inclusion of the poor. In developing SB, companies implement different social business configurations, ranging from forms of outsourcing to internal development models. The study identifies seven main configurations through which companies develop social business, and each configuration presents distinguishing characteristics with respect to the involvement of the company in management, the resources provided and the benefits achieved. By performing different analyses on the data collected, the paper provides detailed insights into how European companies develop SB at the BoP.

Keywords: base of the economic pyramid, corporate social responsibility, social business, social enterprise

Procedia PDF Downloads 211
206 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparison to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 and below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children. 
Results showed that three of the nine DD children displayed significant deviation from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Taken together, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and a single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current 'brain norm' established for the 45 children is tentative, the results from this study provide insights not only for future work on a 'developmental brain norm' with reliable brain indicators but also for the viability of the single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and intervention.
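The "nodal efficiency" indicator discussed above is, in graph-theoretic terms, the mean inverse shortest-path length from a node to every other node in the network; a minimal pure-Python sketch on a toy unweighted network (the region names and edges are illustrative only, not the study's actual connectivity):

```python
from collections import deque

def shortest_hops(adj: dict, source) -> dict:
    """BFS shortest-path lengths (in hops) from source on an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def nodal_efficiency(adj: dict, node) -> float:
    """Mean inverse shortest-path length from node to all other nodes."""
    dist = shortest_hops(adj, node)
    others = [n for n in adj if n != node]
    return sum(1.0 / dist[n] for n in others if n in dist) / len(others)

# Toy 4-region network: a star centred on the 'IPS' node.
adj = {
    "IPS": {"IFG", "SFG", "M1"},
    "IFG": {"IPS"},
    "SFG": {"IPS"},
    "M1": {"IPS"},
}
print(nodal_efficiency(adj, "IPS"))  # 1.0: one hop to every other region
print(nodal_efficiency(adj, "IFG"))  # lower: most regions are two hops away
```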

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 39
205 State and Benefit: Delivering the First State of the Bays Report for Victoria

Authors: Scott Rawlings

Abstract:

Victoria’s first State of the Bays report is an historic baseline study of the health of Port Phillip Bay and Western Port. The report includes 50 assessments of 36 indicators across a broad array of topics, from the nitrogen cycle and water quality to key marine species and habitats. This paper discusses the processes for determining and assessing the indicators and comments on future priorities identified to maintain and improve the health of these waterways. Victoria’s population is now six million and growing at over 100,000 people per year (the highest increase in Australia), and the population of greater Melbourne is over four million. Port Phillip Bay and Western Port are vital marine assets at the centre of this growth and will require adaptive strategies if they are to remain in good condition and continue to deliver environmental, economic and social benefits. In 2014, it was in recognition of these pressures that the incoming Victorian Government committed to reporting on the state of the bays every five years. The inaugural State of the Bays report was issued by the independent Victorian Commissioner for Environmental Sustainability. The report brought together what is known about both bays, based on existing research. It is a baseline on which future reports will build and, over time, include more of Victoria’s marine environment. Port Phillip Bay and Western Port generally demonstrate healthy systems. Specific threats linked to population growth are a significant pressure. Impacts are more significant where human activity is more intense and where nutrients are transported to the bays around the mouths of creeks and drainage systems. The transport of high loads of nutrients and pollutants to the bays from peak rainfall events is likely to increase with climate change, as will sea level rise. Marine pests are also a threat. 
More than 100 introduced marine species have become established in Port Phillip Bay and can compete with native species, alter habitat, reduce important fish stocks and potentially disrupt nitrogen cycling processes. This study confirmed that our data collection regime is better within the Marine Protected Areas of Port Phillip Bay than in other parts. The State of the Bays report is a positive and practical example of what can be achieved through collaboration and cooperation between environmental reporters, Government agencies, academic institutions, data custodians, and NGOs. The State of the Bays 2016 provides an important foundation by identifying knowledge gaps and research priorities for future studies and reports on the bays. It builds a strong evidence base to effectively manage the bays and support an adaptive management framework. The Report proposes a set of indicators for future reporting that will support a step-change in our approach to monitoring and managing the bays – a shift from reporting only on what we do know, to reporting on what we need to know.

Keywords: coastal science, marine science, Port Phillip Bay, state of the environment, Western Port

Procedia PDF Downloads 196
204 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes

Authors: Iris Vural Gursel, Andrea Ramirez

Abstract:

Effective utilization of lignin is an important means for developing economically profitable biorefineries. The current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will therefore be needed to carry lignin transformation well beyond combustion for energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes, and a comparison with the more established lignin pyrolysis route, was made. The aim is to provide insights into potential performance and potential hotspots in order to guide experimental research and ease commercialization by identifying cost drivers, strengths and challenges early. The lignin conversion routes selected for detailed assessment were: (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a basis design followed by process modelling in Aspen was done using experimental yields. A design capacity of 200 kt/year lignin feed was chosen, equivalent to a 1 Mt/y scale lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the product streams defined. For determining the external utility requirement, heat integration was considered, and where possible gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators. 
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of its high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process is found to be feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was the most promising.
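The ROI and PBP indicators used above are simple ratios of annual returns to capital invested; a sketch with hypothetical figures sized only to match the reported 12% ROI and 5-year payback (the capital and cash-flow values below are assumptions, not from the study):

```python
def roi(annual_profit: float, capital_cost: float) -> float:
    """Simple return on investment: annual profit over capital invested."""
    return annual_profit / capital_cost

def payback_period(capital_cost: float, annual_cash_flow: float) -> float:
    """Years for cumulative cash flow to repay the capital cost (undiscounted)."""
    return capital_cost / annual_cash_flow

capital = 100.0   # M EUR, assumed
profit = 12.0     # M EUR/year, assumed
cash_flow = 20.0  # M EUR/year net cash flow, assumed
print(f"ROI {roi(profit, capital):.0%}, PBP {payback_period(capital, cash_flow):.0f} years")
```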

Keywords: biorefinery, economic assessment, lignin conversion, process design

Procedia PDF Downloads 248
203 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to address resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine, m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. 
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the aforementioned security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
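The a1-b1 / a2-b2 embedding described in this abstract can be sketched as a simple pairing of instance lists onto machines; a minimal illustration (the function and machine names are ours, not the i2kit API):

```python
from itertools import zip_longest

def embed_pairs(instances_a: list, instances_b: list) -> list:
    """Co-locate the i-th instance of A with the i-th instance of B on one
    machine, so each pair communicates over localhost without a load balancer
    between services A and B. Unpaired instances get a machine of their own."""
    machines = []
    for i, (a, b) in enumerate(zip_longest(instances_a, instances_b), start=1):
        machines.append({"machine": f"m{i}",
                         "containers": [c for c in (a, b) if c is not None]})
    return machines

deployment = embed_pairs(["a1", "a2"], ["b1", "b2"])
print(deployment)
# [{'machine': 'm1', 'containers': ['a1', 'b1']},
#  {'machine': 'm2', 'containers': ['a2', 'b2']}]
```

A real scheduler would also check runtime-dependency compatibility and scalability behavior before co-locating containers, as the abstract notes.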

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 186
202 The South African Polycentric Water Resource Governance-Management Nexus: Parlaying an Institutional Agent and Structured Social Engagement

Authors: J. H. Boonzaaier, A. C. Brent

Abstract:

South Africa, a water-scarce country, faces the phenomenon that its life-supporting natural water resources are seriously threatened by the very users who are totally dependent on them. South Africa is globally applauded for having some of the best and most progressive water laws and policies. There are, however, growing concerns regarding natural water resource quality deterioration and a critical void in the management of natural resources and compliance with policies, due to increasing institutional uncertainties and failures. These concerns are shared by many South African researchers and practitioners who call for a paradigm shift from talk to practice and a more constructive, practical approach to governance challenges in the management of water resources. A qualitative theory-building case study through longitudinal action research was conducted from 2014 to 2017. The research assessed whether a strategically positioned institutional agent can be parlayed to facilitate and execute water resource management (WRM) on catchment level by engaging multiple stakeholders in a polycentric setting. Through a critical realist approach, a distinction was made between ex ante self-deterministic human behaviour in the realist realm, and ex post governance-management in the constructivist realm. A congruence analysis, including Toulmin’s method of argumentation analysis, was utilised. The study evaluated the unique case of a self-steering local water management institution, the Impala Water Users Association (WUA) in the Pongola River catchment in the northern part of the KwaZulu-Natal Province of South Africa. Exploiting prevailing water resource threats, it expanded its ancillary functions from 20,000 to 300,000 ha.
Embarking on WRM activities, it addressed natural water system quality assessments, social awareness, knowledge support, and threats such as soil erosion, waste and effluent into water systems, coal mining, and water security dimensions, through structured engagement with 21 different catchment stakeholders. By implementing a proposed polycentric governance-management model on a catchment scale, the WUA managed to fill the void. It developed a foundation and capacity to protect the resilience of the natural environment that is critical for freshwater resources, to ensure long-term water security of the Pongola River basin. Further work is recommended on appropriate statutory delegations, mechanisms of sustainable funding, sufficient penetration of knowledge to local levels to catalyse behaviour change, incentivised support from professionals, back-to-back expansion of WUAs to alleviate scale and cost burdens, and the creation of catchment data monitoring and compilation centres.

Keywords: institutional agent, water governance, polycentric water resource management, water resource management

Procedia PDF Downloads 120
201 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass graft (CABG) and valve operations. Three different methods, varying in extent and mechanism, are practiced in our heart center: lone left atrial cryoablation, Cox-Maze IV, and Cox-Maze III. 415 patients (68 ± 0.8 years, 68.2% male) with predisposed atrial fibrillation who initially required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: CryoLA group (cryoablation of the lone left atrium, n=94), Cox-Maze IV group (n=93), and Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent three years of ambulatory follow-up assessments (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or of 3-day Holter electrocardiograms. Attack frequencies of AF and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and listed in terms of anti-rate and anti-rhythm regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect comparable to lone left atrial (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy in comparison to lone cryoablation of the LA and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05). However, this therapeutic advantage was lost during the ongoing follow-up (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the 3rd follow-up year).
In this way, lone LA cryoablation established its antiarrhythmic efficacy: 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV liberated 67.2% of patients from continuous anticoagulant medication. For all 3 procedures, AF recurrences mostly presented as attacks of less than 60 min duration (p > 0.05). Regarding the circadian distribution of the recurrence attacks, weighted over the ongoing follow-ups, lone LA cryoablation achieved and stabilized its antiarrhythmic effects over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively. This pattern was likewise evident in the circadian rhythm of recurring AF attacks. Furthermore, the strategy of rate control was applied much more often than rhythm control to support and maintain the therapeutic successes obtained. Based on the experience in our heart center, lone LA cryoablation presented effects equivalent to the Cox-Maze IV and III procedures in the treatment of AF. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies such as a rate control regimen should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.

Keywords: AF-burden, atrial fibrillation, cardiac monitor, COX MAZE, cryoablation, Holter, LAA

Procedia PDF Downloads 183
200 WhatsApp as a Public Health Management Tool in India

Authors: Drishti Sharma, Mona Duggal

Abstract:

Background: WhatsApp can serve as a cost-effective, scalable, convenient, and popular medium for public health management related communication in the developing world, where the existing system of communication is top-down and slow. The product supports sending and receiving a variety of media: text, photos, videos, documents, and location, as well as voice/video calls. With a growing number of smartphone users and improving access and penetration of the internet, the scope of information technology remains immense in resolving the hurdles faced by the traditional public health system. Poor infrastructure, gaps in digital literacy, faulty documentation, strict organizational hierarchy, and slow movement of information across desks and offices all make WhatsApp an efficient prospect to complement the existing system for communication, feedback and leadership in the public health system in India. Objective: This study investigates the benefits, challenges and limitations associated with WhatsApp usage as a public health management tool. Methods: The study was conducted within the Chandigarh Union Territory. We used a qualitative approach and conducted individual semi-structured interviews and group interviews (n = 10). Participants included medical officers (n = 20), program managers (n = 4), academicians (n = 2) and administrators (n = 2). Thematic and content qualitative analyses were conducted. The message log of the WhatsApp group of one of the health programs was assessed. Results: Medical officers said that WhatsApp helped them remain in touch with the program officer. They could easily give feedback and highlight those challenges which needed immediate intervention from the program managers; hence they felt supported. The application also helped them share pictures of their activities (meetings and field activities) with the group, which they thought inspired others and gave them immense satisfaction.
It also helped build stronger relationships and better coordination among themselves, which is important in team events. For program managers, it had become a portal for coordinating large-scale campaigns. Its reach and the fact that the feedback is real-time make WhatsApp ideal for district-level events. Though the easy informal connectivity made them answerable to their staff, it also provided them with flexibility in operations. It turned out to be an important portal for sharing outcome- and goal-related feedback (both positive and negative) with the team. To be sure, using WhatsApp for the purpose of a public health program presents considerable challenges, including technological barriers, organizational challenges, gender issues, confidentiality concerns and unplanned aftereffects. Nevertheless, its advantages in a low-cost setting make it an efficient alternative. Conclusion: WhatsApp has become an integral part of our lives. Use of this app for public health program management within closed groups looks promising and useful. At the same time, addressing the challenges involved would make its usage safer.

Keywords: communication, mobile technology, public health management, WhatsApp

Procedia PDF Downloads 156
199 Augmented and Virtual Reality Experiences in Plant and Agriculture Science Education

Authors: Sandra Arango-Caro, Kristine Callis-Duehl

Abstract:

The Education Research and Outreach Lab at the Donald Danforth Plant Science Center established the Plant and Agriculture Augmented and Virtual Reality Learning Laboratory (PAVRLL) to promote science education through professional development, school programs, internships, and outreach events. Professional development is offered to high school and college science and agriculture educators on the use and applications of zSpace and Oculus platforms. Educators learn to use, edit, or create lesson plans in the zSpace platform that are aligned with the Next Generation Science Standards. They also learn to use virtual reality experiences created by the PAVRLL available in Oculus (e.g. The Soybean Saga). Using a cost-free loan rotation system, educators can bring the AVR units to the classroom and offer AVR activities to their students. Each activity has user guides and activity protocols for both teachers and students. The PAVRLL also offers activities for 3D plant modeling. High school students work in teams of art-, science-, and technology-oriented students to design and create 3D models of plant species that are under research at the Danforth Center and present their projects at scientific events. Those 3D models are open access through the zSpace platform and are used by PAVRLL for professional development and the creation of VR activities. Both teachers and students acquire knowledge of plant and agriculture content and real-world problems, gain skills in AVR technology, 3D modeling, and science communication, and become more aware and interested in plant science. Students that participate in the PAVRLL activities complete pre- and post-surveys and reflection questions that evaluate interests in STEM and STEM careers, students’ perceptions of three design features of biology lab courses (collaboration, discovery/relevance, and iteration/productive failure), plant awareness, and engagement and learning in AVR environments. 
The PAVRLL was established in the fall of 2019 and since then has trained 15 educators, three of whom will implement the AVR programs in the fall of 2021. Seven students have worked on the 3D plant modeling activity through a virtual internship. Due to the COVID-19 pandemic, the number of teachers trained and classroom implementations have been very limited. It is expected that in the fall of 2021, students will come back to the schools in person, and by the spring of 2022, the PAVRLL activities will be fully implemented. This will allow the collection of enough data on student assessments to provide insights into the benefits and best practices of using AVR technologies in the classroom. The PAVRLL uses cutting-edge educational technologies to promote science education, assesses their benefits, and will continue its expansion. Currently, the PAVRLL is applying for grants to create its own virtual labs where students can experience authentic research using real Danforth research data, based on programs the Education Lab has already used in classrooms.

Keywords: assessment, augmented reality, education, plant science, virtual reality

Procedia PDF Downloads 155
198 Methodological Approach for the Prioritization of Different Micro-Contaminants as Potential River Basin Specific Pollutants in the Upper Tisza River Watershed

Authors: Mihail Simion Beldean-Galea, Virginia Coman, Florina Copaciu, Mihaela Vlassa, Radu Mihaiescu, Adina Croitoru, Viorel Arghius, Modest Gertsiuk, Mikola Gertsiuk

Abstract:

Taking into consideration the huge number of chemicals released into environmental compartments, a proper environmental risk assessment is difficult to achieve due to gaps in legislation and inadequate toxicological assessment of chemical compounds. In Romania, as well as in many other European countries, the chemical status of a water body is characterized taking into consideration the Water Framework Directive (WFD) and the substances listed in its Annex X. This Annex includes 45 substances from different classes of organic compounds and heavy metals for which annual average and maximum allowable concentration environmental quality standards (AA-EQS and MAC-EQS) have been established. For other compounds, which are not included in Annex X, different methodologies to prioritize chemicals for risk assessment and monitoring have been proposed. These methodologies take into account predicted no-effect concentrations (PNECs) of different classes of chemical compounds, available from existing risk assessments or from read-across models for acute toxicity to standard test organisms such as Daphnia magna and Selenastrum capricornutum. Our work presents the monitoring results for 30 priority substances, including polycyclic aromatic hydrocarbons, pesticides, halogenated compounds, plasticizers and heavy metals, and another 34 substances from different classes of pesticides and pharmaceuticals which are not included on the list of priority substances, obtained in the Upper Tisza River Watershed in Romania and Ukraine. The monitoring data were used to establish the list of the most relevant pollutants in the studied area and to identify potential river basin specific pollutants. For this purpose, two indicators, the frequency of exceedance and the extent of exceedance of the PNEC, were evaluated.
These two indicators are based on maximum environmental concentrations (MECs) for priority substances; for the other pollutants, statistically based averages of the measured concentrations are compared to the lowest PNEC thresholds. From the obtained results it can be concluded that polycyclic aromatic hydrocarbons such as fluoranthene, benzo[a]pyrene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[g,h,i]perylene and indeno[1,2,3-cd]pyrene, and heavy metals such as cadmium, lead and nickel can be considered river basin specific pollutants, their concentrations exceeding the annual average EQS. Other compounds such as estrone, estriol, 17β-estradiol, naproxen or some antibiotics (penicillin G, tetracycline or ceftazidime) should be considered for long-term monitoring, as in some cases their concentrations exceed the PNEC. Acknowledgements: This work is performed in the frame of the NATO SfP Programme, Project no. 984440.
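The two prioritisation indicators can be illustrated with a small sketch. The formulas below are a common reading of "frequency of exceedance" and "extent of exceedance" against a PNEC threshold, not necessarily the authors' exact definitions; the sample concentrations are invented for illustration.

```python
# Illustrative sketch (hypothetical data, assumed formulas): two indicators
# comparing measured environmental concentrations (MECs) of a substance
# against its lowest PNEC threshold.

def frequency_of_exceedance(mecs, pnec):
    """Fraction of samples whose concentration exceeds the PNEC."""
    return sum(c > pnec for c in mecs) / len(mecs)


def extent_of_exceedance(mecs, pnec):
    """Ratio of the maximum measured concentration to the PNEC."""
    return max(mecs) / pnec


# Hypothetical monitoring data for one substance (arbitrary units):
mecs = [2, 15, 30, 8]
pnec = 10

freq = frequency_of_exceedance(mecs, pnec)    # 2 of 4 samples exceed -> 0.5
extent = extent_of_exceedance(mecs, pnec)     # max 30 vs PNEC 10 -> 3.0
```

A substance scoring high on both indicators, as the listed PAHs and heavy metals do against the AA-EQS, is a candidate river basin specific pollutant.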

Keywords: prioritization, river basin specific pollutants, Tisza River, water framework directive

Procedia PDF Downloads 286
197 Validation of an Educative Manual for Patients with Breast Cancer Submitted to Radiation Therapy

Authors: Flavia Oliveira de A. M. Cruz, Edison Tostes Faria, Paula Elaine D. Reis

Abstract:

When the breast is submitted to radiation therapy (RT), the most common effects are pain, skin changes, mobility restrictions, local sensory alteration, and fatigue. These effects, if not managed properly, may reduce the quality of life of cancer patients and may lead to treatment discontinuation. Therefore, promoting knowledge and guidelines for symptom management remains a high priority for patients and a challenge for health professionals, given the need to handle side effects in a population with a life-threatening disease. Printed materials are important strategies for supporting educative activities, since they help the individual to assimilate and understand the amount of information transmitted. Nurses' behavior can be systematized through the use of an educative manual, which may be effective in promoting information regarding the treatment, self-care, and how to control the effects of RT at home. In view of the importance of guaranteeing the validity of the material before its use, the objective of this research was to validate the content and appearance of an educative manual for breast cancer patients undergoing RT. The Theory of Psychometrics was used for the validation process in this descriptive methodological research. A minimum agreement rate (AR) of 80% was considered to guarantee the validity of the material. The data were collected from October to December 2017 by means of two assessment tools, constructed in the form of a Likert scale with five levels of understanding. These instruments addressed different aspects of the evaluation, in view of two different groups of participants: 17 experts in the theme area of the educative manual, and 12 women who had previously received RT to treat breast cancer. The manual was titled 'Orientation Manual: radiation therapy in breast' and was focused on breast cancer patients attended at the Department of Oncology of the Brasília University Hospital (UNACON/HUB).
The research project was submitted to the Research Ethics Committee at the School of Health Sciences of the University of Brasília (CAAE: 24592213.1.0000.0030). Only two items of the experts' assessment tool, one related to the manual's ability to promote behavioral and attitude changes and the other related to the extent of its use in other health services, obtained AR < 80%; these were reformulated based on the participants' suggestions and on the literature. All other items were considered appropriate and/or completely appropriate in the three blocks proposed for the experts: objectives - 89%, structure and form - 93%, and relevance - 93%; and good and/or very good in the five blocks of analysis proposed for the patients: objectives - 100%, organization - 100%, writing style - 100%, appearance - 100%, and motivation. The content and appearance validation of the proposed educative manual was thus achieved. The educative manual was considered relevant and pertinent and may contribute to the understanding of the therapeutic process by breast cancer patients during RT, as well as support clinical practice through the nursing consultation.
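The agreement-rate criterion used above can be sketched as a simple computation. This is a generic illustration of an 80% cut-off on a five-point Likert scale, under the assumption that ratings of 4 and 5 count as agreement; the ratings themselves are invented, not the study's data.

```python
# Hedged sketch of the agreement-rate (AR) validity criterion: the share of
# evaluators rating an item as appropriate (assumed here to be 4 or 5 on a
# five-point Likert scale), with 80% as the minimum for validity.

def agreement_rate(ratings, positive=(4, 5)):
    """Percentage of ratings that fall in the 'agree' band."""
    return 100 * sum(r in positive for r in ratings) / len(ratings)


# Hypothetical ratings from 17 experts for a single item:
ratings = [5, 4, 4, 5, 3, 4, 5, 5, 4, 4, 2, 5, 4, 5, 4, 4, 5]
ar = agreement_rate(ratings)
item_valid = ar >= 80  # below 80%, the item is reformulated
```

Items falling below the cut-off, like the two expert-tool items in the study, are sent back for reformulation rather than discarded.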

Keywords: oncology nursing, nursing care, validation studies, educational technology

Procedia PDF Downloads 109
196 Unraveling Language Contact through Syntactic Dynamics of ‘Also’ in Hong Kong and British English

Authors: Xu Zhang

Abstract:

This article unveils an indicator of language contact between English and Cantonese in one of the Outer Circle Englishes, Hong Kong (HK) English, through an empirical investigation of 1000 tokens from the Global Web-based English (GloWbE) corpus, employing frequency analysis and logistic regression analysis. Cantonese, and Chinese in general, is widely perceived to be marked by a context-oriented underlying thinking pattern: Chinese speakers rely on semantic context more than on syntactic rules and lexical forms. This linguistic trait carries over to their use of English, affording greater flexibility to formal elements in constructing English sentences. The study focuses on the syntactic positioning of the focusing subjunct 'also', a linguistic element used to add new or contrasting prominence to specific sentence constituents. The English language generally allows flexibility in the relative position of 'also', although there is a preference for close marking relationships. This article shifts attention to Hong Kong, where Cantonese and English converge, and 'also' finds counterparts in Cantonese 'jaa' and Mandarin 'ye'. Employing a corpus-based, data-driven method, we investigate the syntactic position of 'also' in both HK and GB English. The study aims to ascertain whether HK English exhibits greater 'syntactic freedom', allowing for a more distant marking relationship with 'also' compared to GB English. The analysis involves a random extraction of 500 samples each of HK and GB English from the GloWbE corpus, forming a dataset (N=1000). Exclusions are made for cases where 'also' functions as an additive conjunct or serves as a copulative adverb, as well as sentences lacking sufficient indication that 'also' functions as a focusing particle. The final dataset comprises 820 tokens, with 416 for GB and 404 for HK, annotated according to the focused constituent and the relative position of 'also'.
Frequency analysis reveals significant differences in the relative position of 'also' and in marking relationships between HK and GB English. Regression analysis indicates a preference in HK English for a distant marking relationship between 'also' and its focused constituent. Notably, the subject and other constituents emerge as significant predictors of a distant position for 'also'. Together, these findings underscore the nuanced linguistic dynamics in HK English and contribute to our understanding of language contact. They suggest that future pedagogical practice should consider incorporating syntactic variation within English varieties, facilitating learners' effective communication in diverse English-speaking environments and enhancing their intercultural communication competence.
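The frequency comparison behind such a finding can be sketched with a contingency table. The counts below are invented placeholders (only the per-variety token totals, 404 HK and 416 GB, come from the abstract); the odds ratio computed here is the quantity that the exponentiated coefficient of a variety predictor in a logistic regression would encode.

```python
# Illustrative sketch with hypothetical counts: comparing the rate of
# distant 'also'-to-focus placement in HK vs GB English, and the odds
# ratio that a logistic regression coefficient for variety corresponds to.

counts = {            # (distant, adjacent) placements of 'also'
    "HK": (120, 284),  # hypothetical split of the 404 HK tokens
    "GB": (60, 356),   # hypothetical split of the 416 GB tokens
}


def distant_rate(variety):
    distant, adjacent = counts[variety]
    return distant / (distant + adjacent)


def odds_ratio(v1, v2):
    d1, a1 = counts[v1]
    d2, a2 = counts[v2]
    return (d1 / a1) / (d2 / a2)


or_hk_gb = odds_ratio("HK", "GB")
# An odds ratio above 1 would indicate that HK English favours a more
# distant marking relationship, in line with the abstract's finding.
```

In a full analysis the regression would also include the focused constituent (subject vs other) as a predictor, since the abstract reports it as significant.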

Keywords: also, Cantonese, English, focus marker, frequency analysis, language contact, logistic regression analysis

Procedia PDF Downloads 37
195 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, accidental risk values become relevant points of discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure due to its large economic impact in case of a failure, industrial safety has become a critical issue for current society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate probabilities of failure of the infrastructure and the consequences associated with those failures.
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the Deep Learning for Neuro-Fuzzy Risk Assessment method involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
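The fuzzy front end that such a neuro-fuzzy assessor relies on can be sketched briefly. This is a generic Sugeno-style example, not the authors' GMDH model: the membership breakpoints, the single near-miss-frequency input, and the rule weights are all illustrative assumptions standing in for what would be learned from near-miss data.

```python
# Hedged sketch of fuzzy risk scoring: triangular membership functions map
# a near-miss frequency onto linguistic levels, and a weighted-average
# (Sugeno-style) defuzzification yields a crisp 0-1 risk score. All
# breakpoints and weights below are illustrative, not trained values.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def risk_score(near_misses_per_year):
    # Degrees of membership in 'low', 'medium', 'high' near-miss frequency.
    low = tri(near_misses_per_year, -1, 0, 5)
    med = tri(near_misses_per_year, 2, 6, 10)
    high = tri(near_misses_per_year, 8, 15, 30)
    # Each rule maps a frequency level to a risk weight (0.1 / 0.5 / 0.9);
    # the crisp score is the membership-weighted average.
    num = 0.1 * low + 0.5 * med + 0.9 * high
    den = low + med + high
    return num / den if den else 0.0


score = risk_score(6)  # fully 'medium' frequency -> score 0.5
```

In a neuro-fuzzy system, the breakpoints and rule weights hard-coded here would be tuned against data, which is where learning from near-miss records would enter.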

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 281
194 Delegation or Assignment: Registered Nurses’ Ambiguity in Interpreting Their Scope of Practice in Long Term Care Settings

Authors: D. Mulligan, D. Casey

Abstract:

Introductory Statement: Delegation is when a registered nurse (RN) transfers a task or activity that is normally within their scope of practice to another person (delegatee). RN delegation is common practice with unregistered staff, e.g., student nurses and health care assistants (HCAs). As the role of the HCA becomes increasingly embedded as a direct care and support role, especially in long-term residential care for older adults, RNs are uncertain about their role as delegators. Assignment is when a task is transferred to a person and that task is within the role specification of the delegatee. RNs in long-term care (LTC) for older people are increasingly working in teams with fewer RNs and more HCAs providing direct care to the residents. The RN is responsible and accountable for their decision to delegate and assign tasks to HCAs. In an interpretive multiple case study exploring how delegation of tasks by RNs to HCAs occurred in long-term care settings in Ireland, the importance of RNs understanding their scope of practice emerged. Methodology: Focus group interviews and individual interviews were undertaken as part of a multiple case study. Both cases, anonymized as Case A and Case B, were within the public health service in Ireland. The case study sites were long-term care settings for older adults located in different social care divisions and in different geographical areas. Four focus group interviews with staff nurses and three individual interviews with CNMs were undertaken. The interactive data analysis approach was the analytical framework used, with within-case and cross-case analysis. The theoretical lens of organizational role theory, applying the role episode model (REM), was used to understand, interpret, and explain the findings. Study Findings: RNs and CNMs understood the role of the nurse regulator and the scope of practice. RNs understood that the RN is accountable for the care and support provided to residents.
However, RNs and CNM2s could not describe delegation in the context of their scope of practice. In both cases, the RNs did not have a standardized process for assessing HCA competence to undertake nursing tasks or interventions. RNs did not routinely supervise HCAs. Tasks were assigned, not delegated. There were differences between the cases in the understanding of which nursing tasks required delegation: HCAs in Case A undertook clinical vital sign assessments and documentation, while HCAs in Case B did not routinely undertake these activities. Delegation and assignment were influenced by organizational factors, e.g., the model of care, the absence of delegation policies, inadequate RN education on delegation, and a lack of RN and HCA role clarity. Concluding Statement: Nurse staffing levels and skill mix in long-term care settings continue to change, with more HCAs providing more direct care and support. With decreasing RN staffing levels, RNs will be required to delegate and assign more direct care to HCAs. There is a requirement to distinguish between RN assignment and delegation at the policy, regulation, and organizational levels.

Keywords: assignment, delegation, registered nurse, scope of practice

Procedia PDF Downloads 137
193 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was determined in advance by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and, for the purposes of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural networks and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and, with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world’s most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, of the inventive step as such, and of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, the key focus of this paper is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of natural persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or the AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 98