Search results for: historical framework
789 Measuring Self-Regulation and Self-Direction in Flipped Classroom Learning
Authors: S. A. N. Danushka, T. A. Weerasinghe
Abstract:
The diverse necessities of instruction could be addressed effectively with the support of new dimensions of ICT-integrated learning such as blended learning, a combination of face-to-face and online instruction that ensures greater flexibility in student learning and congruity of course delivery. As blended learning has become the 'new normal' in education, many experimental and quasi-experimental research studies provide ample evidence of its successful implementation in many fields of study, but it is hard to justify whether blended learning could work similarly in the delivery of technology-teacher development programmes (TTDPs). The present study addresses this particular research uncertainty, and having considered existing research approaches, the study methodology was set to decide the efficient instructional strategies for flipped classroom learning in TTDPs. In a quasi-experimental pre-test and post-test design with a mixed-methods research approach, the major study objective was tested with two heterogeneous samples (N=135) identified in a virtual learning environment in a Sri Lankan university. A non-randomized, informal 'before-and-after without control group' design was employed, and two data collection methods, identical pre- and post-tests and Likert-scale questionnaires, were used in the study. Two selected instructional strategies, self-directed learning (SDL) and self-regulated learning (SRL), were tested in an appropriate instructional framework with two heterogeneous samples (pre-service and in-service teachers). Data were statistically analyzed, and the more efficient instructional strategy was identified via t-tests, ANOVA, and ANCOVA. The effectiveness of the two instructional strategy implementation models was decided via multiple linear regression analysis. ANOVA (p < 0.05) shows that age, prior educational qualifications, gender, and work experience do not impact the learning achievements of the two diverse groups of learners when the instructional strategy is changed. ANCOVA (p < 0.05) analysis shows that SDL is more efficient than SRL for the two diverse groups of technology teachers. Multiple linear regression (p < 0.05) analysis shows that the staged self-directed learning (SSDL) model and the four-phased model of motivated self-regulated learning (COPES Model) are efficient in the delivery of course content in flipped classroom learning.
Keywords: COPES model, flipped classroom learning, self-directed learning, self-regulated learning, SSDL model
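As a rough illustration of the kind of analysis this abstract describes, the hedged sketch below (Python with pandas and statsmodels) shows how post-test scores might be compared between SDL and SRL groups with the pre-test score as a covariate; the column names pre, post, strategy, and group and the toy scores are invented placeholders, not the study's actual data or variables.

```python
# Hedged sketch: ANCOVA comparing post-test scores across instructional
# strategies (SDL vs. SRL) with the pre-test score as a covariate.
# Column names ("pre", "post", "strategy", "group") are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical tidy data: one row per learner.
df = pd.DataFrame({
    "pre":      [42, 55, 38, 61, 47, 52, 44, 58],
    "post":     [63, 71, 55, 80, 60, 74, 58, 77],
    "strategy": ["SDL", "SDL", "SRL", "SDL", "SRL", "SDL", "SRL", "SRL"],
    "group":    ["pre-service"] * 4 + ["in-service"] * 4,
})

# ANCOVA: post ~ strategy, adjusting for pre-test performance.
ancova = smf.ols("post ~ C(strategy) + pre", data=df).fit()
print(anova_lm(ancova, typ=2))          # F-test for the strategy effect

# Multiple linear regression across both learner groups, as in the abstract.
mlr = smf.ols("post ~ C(strategy) * C(group) + pre", data=df).fit()
print(mlr.summary())
```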
Procedia PDF Downloads 197
788 How Defining the Semi-Professional Journalist Is Creating Nuance and a Familiar Future for Local Journalism
Authors: Ross Hawkes
Abstract:
The rise of hyperlocal journalism and its role in the wider local news ecosystem has been debated across both industry and academic circles, particularly via the lens of structures, models, and platforms. The nuances within this sphere are now allowing the semi-professional journalist to emerge as a key component of the landscape at the fringes of journalism. By identifying and framing the labour of these individuals against a backdrop of change within the professional local newspaper publishing industry, it is possible to address wider debates around the ways in which participants enter and exist in the space between amateur and professional journalism. Considerations around prior experience and understanding allow us to better shape and nuance the hyperlocal landscape in order to understand the challenges and opportunities facing local news via this emergent form of semi-professional journalistic production. The disruption to local news posed by the changing nature of audiences, long-established methods of production, the rise of digital platforms, and increased competition in the online space has brought questions around the very nature and identity of local news, as well as the uncertain future and precarity which surrounds it. While the hyperlocal sector has long been regarded as a potential future direction for local journalism, through an alternative approach to reporting and as a mechanism for participants to pass from amateurism towards professionalism, there is now a semi-professional space being occupied in a different way. Those framed as semi-professional journalists are not necessarily transitioning through this space at the fringes of the professional industry; instead, they are occupying and claiming the space as an entity in itself. By framing the semi-professional journalist through a lens of prior experience and knowledge of the sector, it is possible to identify how their motivations vary from the traditional metrics of financial gain, personal progression, or a sense of civic or community duty. While such factors may be by-products of their labour, the desire of such reporters to recreate and retain experiences and values from their past as a participant or consumer is the central basis of the framework to define the semi-professional journalist. Through understanding the motivations, aims and factors shaping the semi-professional journalist within the wider journalism and hyperlocal journalism debates and landscape, it will be possible to better frame the role they can play in sustaining the longer-term provision of local news and addressing broader issues and factors within the sector.
Keywords: hyperlocal, journalism, local news, semi-professionalism
Procedia PDF Downloads 28
787 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing volume of data, there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink and comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks in MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducer results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
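For readers unfamiliar with the baseline algorithm being parallelized here, the minimal single-machine Apriori sketch below (plain Python, not the authors' RA-Apriori or their Flink/Spark implementations) illustrates the iterative candidate-generation and pruning structure in which each pass depends on the previous one, the very dependency that Flink's pipelining is said to relax; the sample transactions are invented.

```python
# Minimal single-machine Apriori sketch (illustrative only; not the
# authors' RA-Apriori or their Flink/Spark implementations).
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) with their supports."""
    transactions = [set(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Pass 1: frequent 1-itemsets.
    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    result = {fs: support(fs) for fs in frequent}

    k = 2
    while frequent:
        # Candidate generation: join (k-1)-itemsets, then prune by the
        # Apriori property (every (k-1)-subset must itself be frequent).
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        # Support counting: the step parallelized via MapReduce/Flink.
        frequent = {c for c in candidates if support(c) >= min_support}
        result.update({c: support(c) for c in frequent})
        k += 1
    return result

if __name__ == "__main__":
    data = [["bread", "milk"], ["bread", "butter"], ["milk", "butter"],
            ["bread", "milk", "butter"], ["bread", "milk"]]
    print(apriori(data, min_support=0.4))
```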
Procedia PDF Downloads 294
786 Beyond Voluntary Corporate Social Responsibility: Examining the Impact of the New Mandatory Community Development Agreement in the Mining Sector of Sierra Leone
Authors: Wusu Conteh
Abstract:
Since the 1990s, neo-liberalization has become a global agenda. The free market ushered in an unprecedented drive by Multinational Corporations (MNCs) to secure mineral rights in resource-rich countries. Several governments in the Global South implemented a liberalized mining policy with support from the International Financial Institutions (IFIs). MNCs have maintained that voluntary Corporate Social Responsibility (CSR) has engendered socio-economic development in mining-affected communities. However, most resource-rich countries are struggling to transform the resources into sustainable socio-economic development. They are trapped in what has been widely described as the 'resource curse.' In an attempt to address this resource conundrum, the African Mining Vision (AMV) of 2009 developed a model on resource governance. The advent of the AMV has engendered the introduction of mandatory community development agreements (CDAs) into the legal framework of many countries in Africa. In 2009, Sierra Leone enacted the Mines and Minerals Act, which obligates mining companies to invest in Primary Host Communities. The study employs interviews and field observation techniques to explicate the dynamics of the CDA program. A total of 25 respondents, comprising government officials, NGOs/CSOs and community stakeholders, were interviewed. The study focuses on a case study of the Sierra Rutile CDA program in Sierra Leone. Extant scholarly works have extensively explored the resource curse and voluntary CSR, but few studies have examined the mandatory CDA and its impact on socio-economic development in mining-affected communities. Thus, the purpose of this study is to explicate the impact of the CDA in Sierra Leone. Using the theory of change helps to understand how the availability of mandatory funds can empower communities to take an active part in decision-making related to the development of the communities. The results show that the CDA has engendered a predictable fund for community development. It has also empowered ordinary members of the community to determine the development program. However, the CDA has created a new ground for contestation between the pre-existing local governance structure (traditional authority) and the newly created community development committee (CDC) that is headed by an ordinary member of the community.
Keywords: community development agreement, impact, mandatory, participation
Procedia PDF Downloads 123
785 Assessing Gender Mainstreaming Practices in the Philippine Basic Education System
Authors: Michelle Ablian Mejica
Abstract:
Female drop-outs due to teenage pregnancy and gender-based violence in schools are two of the most contentious and current gender-related issues faced by the Department of Education (DepEd) in the Philippines. Since 1990, the country has adopted gender mainstreaming as the main strategy to eliminate gender inequalities in all aspects of society, including education. This research examines the extent and magnitude to which gender mainstreaming is implemented in basic education from the national to the school level. It seeks to discover the challenges faced by the central and field offices, particularly by the principals who serve as decision-makers in the schools where teaching and learning take place and where opportunities that may aggravate, conform to or transform gender inequalities and hierarchies exist. The author conducted surveys and interviews among 120 elementary and secondary principals in the Division of Zambales as well as selected gender division and regional focal persons within Region III - Central Luzon. The study argues that DepEd needs to review, strengthen and revitalize its gender mainstreaming because the efforts do not penetrate the schools and are not enough to lessen or eliminate gender inequalities within the schools. The study identified some of the major challenges in the implementation of gender mainstreaming as follows: absence of a national gender-responsive education policy framework, lack of gender-responsive assessment and monitoring tools, poor quality of gender and development related training programs, and poor data collection and analysis mechanisms. Other constraints include poor coordination mechanisms among implementing agencies, lack of a clear implementation strategy, ineffective or poor utilization of the GAD budget, and lack of teacher- and learner-centered GAD activities. The paper recommends a review of the department's gender mainstreaming efforts to align with the mandate of the agency and provide a gender-responsive teaching and learning environment. It suggests that the focus must be on the formulation of gender-responsive policies and programs, improvement of the existing mechanism, and the conduct of trainings focused on gender analysis, budgeting and impact assessment, not only for principals and the GAD focal point system but also for parents and other school stakeholders.
Keywords: curriculum and instruction, gender analysis, gender budgeting, gender impact assessment
Procedia PDF Downloads 344
784 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case
Authors: Andrea Ando
Abstract:
The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. More recently, non-fungible tokens (NFTs) have come to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play an important role in our online interactions, as they increasingly feature in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that there will be a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in our contemporary tech market: the former CEO of the famous social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. As might be expected, their growth has paved the way for civil disputes, mostly regarding their position under the current intellectual property law of each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT is a violation of intellectual property and/or property rights. If, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. The question then becomes whether a person owns the cryptographic code, which is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or whatever is attached to this blockchain, which may even be a physical object or piece of art. Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case, and finally, it analysed whether the expectations were met or not.
Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law
Procedia PDF Downloads 89
783 The Impact of Task Type and Group Size on Dialogue Argumentation between Students
Authors: Nadia Soledad Peralta
Abstract:
Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation for negotiating contradictions and complex decisions. Argumentation is therefore a target for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments are analyzed according to group size (dyads and triads) and type of task (reading of frequency tables, causal explanation of physical phenomena, decisions regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed from the results of a pre-test that ensured the heterogeneity of points of view between participants. Group 1 consisted of 56 participants (working in dyads, total: 28), and group 2 consisted of 33 participants (working in triads, total: 11). A quasi-experimental design was used in which the effects of the two variables (group size and type of task) on argumentation were analyzed. Three types of argumentation are described: authentic dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads. That is, although people express their own arguments, there is no authentic argumentative interaction; accordingly, there are few reciprocal evaluations and counter-arguments in dyads. By contrast, authentically dialogical argument prevails in triads, showing constant feedback between participants' points of view. It was observed that, in general, the type of task generates specific types of argumentative interactions. However, it is possible to emphasize that authentically dialogical arguments predominate in the logical tasks, whereas individualistic or pseudo-dialogical ones are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified in an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains. The argumentative function not only allows metacognitive reflection on one's own point of view but also allows people to benefit from exchanging points of view in interactive contexts.
Keywords: sociocognitive interaction, argumentation, university students, size of the group
Procedia PDF Downloads 83
782 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable; it has a fixed functional form and known coefficients, and this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling is not exhaustive in the sampling-based methodology. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
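To make the sampling-based methodology more concrete, the hedged sketch below (Python with NumPy) propagates a toy model through a double loop, an outer loop over epistemic interval parameters and an inner Monte Carlo loop over an aleatory variable, together with a first-order (delta-method) variance estimate; the response function g() and all parameter ranges are invented for illustration and are not the NASA MUQC model.

```python
# Hedged sketch of sampling-based uncertainty propagation with a
# first-order error estimate. The response function g() and all parameter
# ranges are toy values, not the NASA Langley MUQC model.
import numpy as np

rng = np.random.default_rng(0)

def g(x1, x2):
    """Toy computational model response."""
    return x1 ** 2 + 3.0 * np.sin(x2)

# Aleatory variable: x1 ~ Normal(mu, sigma), but mu is epistemic (interval).
mu_interval = (0.8, 1.2)        # reducible epistemic uncertainty on the mean
sigma = 0.1                      # known aleatory spread
x2_interval = (0.0, 0.5)         # purely epistemic, fixed-but-unknown value

# Outer loop over epistemic samples, inner Monte Carlo over aleatory samples.
lo, hi = np.inf, -np.inf
for mu in np.linspace(*mu_interval, 11):
    for x2 in np.linspace(*x2_interval, 11):
        y = g(rng.normal(mu, sigma, size=10_000), x2)
        lo, hi = min(lo, y.min()), max(hi, y.max())
print(f"sampled response bounds: [{lo:.3f}, {hi:.3f}]")

# First-order (delta-method) error analysis at a nominal point:
# Var[g] ~ (dg/dx1)^2 * Var[x1], with x2 held at a nominal value.
mu0 = np.mean(mu_interval)
dg_dx1 = 2.0 * mu0
print(f"first-order std of response: {abs(dg_dx1) * sigma:.3f}")
```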
Procedia PDF Downloads 240
781 “I” on the Web: Social Penetration Theory Revised
Authors: Dr. Dionysis Panos, Dpt. of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of New Media, and particularly Social Media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. Some of these main conclusions include: (1) There is a clear and evidenced shift in users' perception of the degree of "security" and "familiarity" of the Web between the pre- and the post-Web 2.0 era. The role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of the "audience" we address in interpersonal communication. The previous "general and unknown audience" of personal home pages converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication and the critical changes that have occurred since the advance of Web 2.0 lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model for understanding the way interpersonal communication evolves is proposed here, based on a revision of social penetration theory.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 372
780 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis
Authors: Michael Lognoul
Abstract:
In April 2021, the European Commission released its AI Act Proposal, which is the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on respect for fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g., information on the AI nature of systems in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, the content of the obligations, and their practical implications. On the scope of transparency obligations tailored for high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might potentially raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. However, on the practical implications of these transparency obligations, the research notes that concern arises due to the potential illiteracy of high-risk AI system users. They might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information should be comprehensible to its recipients (i.e., users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, should be included in the Proposal in addition to transparency obligations.
Keywords: AI Act Proposal, explainability of AI, high-risk AI systems, transparency requirements
Procedia PDF Downloads 319
779 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools. However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight 'points of growth' for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design to create and validate an original Russian comprehensive computer test for measuring intellectual development in school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
Procedia PDF Downloads 80
778 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives
Authors: Tayyab Ahmad, Gerard Healey
Abstract:
Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under different requirements and deliverables, which makes each project unique. For an owner organization, e.g., a higher-education institution, with a significant building stock, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools, e.g., Green Star, LEED, and BREEAM, are becoming increasingly popular and are well acknowledged worldwide for verifying sustainable development. It is imperative to synthesize a multi-criteria decision-making tool that can capitalize on the point-based methodology of rating systems while customizing the sustainable development of building projects according to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool for the University of Melbourne is developed that builds on the action learning and experience of implementing green buildings at the University of Melbourne. The tool evaluates the different sustainable building initiatives based on the framework of the Green Star rating tool of the Green Building Council of Australia. For each sustainability initiative, the decision-making tool makes an assessment based on at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the University's reputation, and increase confidence in quality construction. The use of a weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing the green building initiatives in terms of organizational priorities. The index value of each initiative is based on its alignment with some of the key performance criteria. The usefulness of the decision-making tool is validated by conducting structured interviews with some of the key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool helps a client organization decide which sustainability initiatives and practices are more important to pursue than others within limited resources.
Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model
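As an illustration of how such a weighted aggregation model might index initiatives, the sketch below (plain Python) computes a priority score as a weighted sum of criterion ratings; the criteria labels, weights, and ratings are invented placeholders rather than the University of Melbourne's actual values.

```python
# Hedged sketch of a weighted-aggregation index for sustainability
# initiatives. Criteria, weights, and ratings are illustrative placeholders.

# Organizational weights (sum to 1.0) over five performance criteria.
weights = {
    "ease_of_achievement":       0.20,
    "project_objectives":        0.25,
    "life_cycle_cost_reduction": 0.25,
    "reputation":                0.15,
    "construction_quality":      0.15,
}

# Ratings of each initiative on a 1-5 scale (hypothetical values).
initiatives = {
    "rainwater_harvesting": {"ease_of_achievement": 4, "project_objectives": 3,
                             "life_cycle_cost_reduction": 4, "reputation": 3,
                             "construction_quality": 2},
    "high_efficiency_hvac": {"ease_of_achievement": 2, "project_objectives": 5,
                             "life_cycle_cost_reduction": 5, "reputation": 4,
                             "construction_quality": 4},
}

def priority_index(ratings, weights):
    """Weighted-sum index of an initiative against organizational priorities."""
    return sum(weights[c] * ratings[c] for c in weights)

# Rank initiatives by their index value, highest priority first.
ranked = sorted(initiatives,
                key=lambda name: priority_index(initiatives[name], weights),
                reverse=True)
for name in ranked:
    print(f"{name}: {priority_index(initiatives[name], weights):.2f}")
```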
Procedia PDF Downloads 234
777 Campaigns of Youth Empowerment and Unemployment in Development Discourses: The Case of Ethiopia
Abstract:
In today's sharp downturn of the global economy, nations are facing many economic, social and political challenges; universally, there is high distress over food and other survival insecurities. Further, as a result of conflict, natural disasters, and leadership influences, youths are existentially less empowered and unemployed, especially in developing countries. To handle these challenges well, it is important to search, investigate and deliberate about youth, unemployment, empowerment and possible management approaches, as youths have the potential to carry and fight such battles. The method adopted is a qualitative analysis of secondary data sources on youth empowerment, unemployment and development as an inclusive framework. Youth unemployment is a major development headache for most African countries. In Ethiopia, following weak youth empowerment, youth unemployment has increased from time to time, and the quality of education and organizational linkages remain important constraints. As a management challenge, although access to quality education for Ethiopian youths is an important constraint, the country's youths are deceptively fortified and harassed in a vicious political struggle as they try to bring social and economic changes to the country. Further, thousands of youths are rendered inactive, criminalized or have lost their lives; this leaves youths hopeless and angry and pushes them further towards addiction, prostitution, violence, and illegitimate migration. This youth challenge is not confined to African countries; rather, it is a global burden and is treated as a global agenda. As a resolution, the construction of a healthy education system can create independent youths who achieve success and accelerate development. Developing countries should pursue development by cultivating empowerment tools through long- and short-term education, implementing policy in action, diminishing wide-ranging gaps (of religion, ethnicity and region), and taking a large youth population as an opportunity to empower. Further, managing and empowering youths to be involved in decision-making, giving them political weight and building a network of organizations to ease access to job opportunities are important suggestions to keep youths in work, both to increase their income and to improve the country's food security balance.
Keywords: development, Ethiopia, management, unemployment, youth empowerment
Procedia PDF Downloads 59
776 Euthanasia as a Case of Judicial Entrepreneurship in India: Analyzing the Role of the Supreme Court in the Policy Process of Euthanasia
Authors: Aishwarya Pothula
Abstract:
Euthanasia in India is a politically dormant policy issue in the sense that discussions around it are sporadic in nature (usually following developments in specific cases) and it stays a dominant issue in the public domain only for a fleeting period. In other words, it is a non-political issue that has been unable to successfully get on the policy agenda. This paper studies how the Supreme Court of India (SC) plays a role in euthanasia's policy making. In 2011, the SC independently put a law in place that legalized passive euthanasia through its judgement in the Aruna Shanbaug v. Union of India case. According to this, it is no longer illegal to withhold/withdraw a patient's medical treatment in certain cases. This judgement, therefore, is the empirical focus of this paper. The paper essentially employs two techniques of discourse analysis to study the SC's system of argumentation. The two methods, text analysis using Gasper's analysis table and frame analysis, are complemented by two discourse techniques called metaphor analysis and lexical analysis. The framework within which the analysis is conducted lies in 1) the judicial process of India, i.e., the SC procedures and the constitutional rules and provisions, and 2) John W. Kingdon's theory of policy windows and policy entrepreneurs. The results of this paper are three-fold: first, the SC dismisses the petitioner's request for passive euthanasia on inadequate and weak grounds, thereby setting no precedent for the historic law they put in place. In other words, they leave the decision open for the Parliament to act upon. Hence the judgement, contrary to the arguments of many, is by no means an instance of judicial activism/overreach. Second, they define euthanasia in a way that resonates with existing broader societal themes. They combine this with a remarkable use of authoritative and protective tones/stances to settle at an intermediate position that balances the possible opposition to their role in the process and what they (perhaps) perceive to be an optimal solution. Third, they soften up the policy community (including the public) to the idea of passive euthanasia, leading it towards parliamentary legislation. They achieve this by shaping prevalent principles, provisions and worldviews through an astute use of the legal instruments at their disposal. This paper refers to this unconventional role of the SC as 'judicial entrepreneurship', which is also the first scholarly contribution towards research on euthanasia as a policy issue in India.
Keywords: argumentation analysis, Aruna Ramachandra Shanbaug, discourse analysis, euthanasia, judicial entrepreneurship, policy-making process, Supreme Court of India
Procedia PDF Downloads 267
775 Changes in Skin Microbiome Diversity According to the Age of Xian Women
Authors: Hanbyul Kim, Hye-Jin Kin, Taehun Park, Woo Jun Sul, Susun An
Abstract:
Skin is the largest organ of the human body and provides a diverse habitat for various microorganisms. The ecology of the skin surface selects distinctive sets of microorganisms and is influenced by both endogenous intrinsic factors and exogenous environmental factors. The diversity of the bacterial community in the skin also depends on multiple host factors: gender, age, health status, and location. Among them, age-related changes in skin structure and function are attributable to combinations of endogenous intrinsic factors and exogenous environmental factors. Skin aging is characterized by a decrease in sweat, sebum and immune function, resulting in significant alterations in skin surface physiology, including pH, lipid composition, and sebum secretion. The present study gives a comprehensive picture of the variation of skin microbiota and its correlation with age by analyzing and comparing the metagenome of the skin microbiome using a next-generation sequencing method. Skin bacterial diversity and composition were characterized and compared between two different age groups: younger (20-30 y) and older (60-70 y) Xian, Chinese women. A total of 73 healthy women met two conditions: (I) living in Xian, China; (II) maintaining healthy skin status during the period of this study. Based on the Ribosomal Database Project (RDP) database, the skin samples of the 73 participants were dominated by the ten most abundant genera, including Chryseobacterium, Propionibacterium, Enhydrobacter, and Staphylococcus. Although these genera were the most predominant overall, each genus showed a different proportion in each group. The most dominant genus, Chryseobacterium, was relatively more abundant in the young group than in the old group. Similarly, Propionibacterium and Enhydrobacter occupied a higher proportion of the skin bacterial composition of the young group. Staphylococcus, in contrast, was more abundant in the old group. Beta diversity, which represents the ratio between regional and local species diversity, differed significantly between the two age groups. Likewise, the principal coordinate analysis (PCoA) values, representing phylogenetic distances in a two-dimensional framework using the operational taxonomic unit (OTU) values of the samples, also showed differences between the two groups. Thus, our data suggest that the composition and diversification of skin microbiomes in adult women are largely affected by chronological and physiological skin aging.
Keywords: next generation sequencing, age, Xian, skin microbiome
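A minimal sketch of the kind of ordination described above is given below (Python with NumPy/SciPy): Bray-Curtis dissimilarities are computed from a toy OTU table and projected with classical PCoA. The OTU counts and sample labels are invented and do not come from the Xian dataset, and Bray-Curtis is only one of several beta diversity metrics such a study might use.

```python
# Hedged sketch: Bray-Curtis beta diversity and classical PCoA on a toy
# OTU table. Counts are invented, not the Xian skin microbiome data.
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Rows = samples (2 young, 2 old), columns = OTUs (hypothetical counts).
otu_table = np.array([
    [120, 30, 5, 0],    # young_1
    [100, 40, 8, 2],    # young_2
    [20, 10, 60, 90],   # old_1
    [15, 12, 70, 80],   # old_2
], dtype=float)

# Bray-Curtis dissimilarity matrix (beta diversity between samples).
D = squareform(pdist(otu_table, metric="braycurtis"))

# Classical PCoA: double-center the squared distances, then eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))

for label, xy in zip(["young_1", "young_2", "old_1", "old_2"], coords):
    print(label, np.round(xy, 3))   # young vs. old samples separate on axis 1
```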
Procedia PDF Downloads 155
774 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell
Authors: Deborah Eric, Abbas Ahmad Khan
Abstract:
Lead halide perovskite quantum dots have attracted immense scientific and technological interest for photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx-based quantum dots to implement their use in intermediate band solar cells (IBSCs). These materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. This layer of quantum dots optimizes the position and bandwidth of the intermediate band (IB), which lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with spherical, cubic, and conical geometries has been embedded in the CsPbBr3 matrix. Bound-state wavefunctions give rise to minibands, which result in the formation of the IB. If there is more than one miniband, then there is a possibility of having more than one IB. The optimization of QD size results in more IBs in the forbidden region. A one-band time-independent Schrödinger equation using the effective mass approximation with a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation for the calculation of eigenenergies, and the eigenenergies of the quasi-bound states are obtained using an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring barriers of CsPbI3. Electronic states are computed using the Schrödinger equation with the effective mass approximation by considering the quantum dot and wetting layer assembly. Results have shown that varying the quantum dot size affects the energy pinning of the QD. Changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have maximum ground state energy at a small radius. Increasing the wetting layer thickness exhibits energy signatures similar to bulk material for each QD size.
Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation
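To illustrate the type of eigenvalue study described above, the hedged sketch below (Python with NumPy) solves a one-dimensional, one-band effective-mass Schrödinger equation for a finite well with a step potential barrier by finite differences. The effective mass, well width, and barrier height are placeholder values, not fitted CsMAPbI3-xBrx parameters, and the full 3D envelope-function/BenDaniel-Duke treatment in the paper goes well beyond this toy example.

```python
# Hedged sketch: bound states of a 1D finite square well in the effective
# mass approximation, via a finite-difference eigenvalue problem.
# Material parameters are placeholders, not fitted perovskite QD values.
import numpy as np

hbar = 1.054571817e-34       # J*s
m0 = 9.1093837015e-31        # kg
m_eff = 0.2 * m0             # assumed effective mass
L = 60e-9                    # simulation box (m)
well_width = 8e-9            # "dot" width (m)
V0 = 0.4 * 1.602176634e-19   # barrier height (J), assumed 0.4 eV

N = 2000
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < well_width / 2, 0.0, V0)   # step potential barrier

# H = -hbar^2/(2 m_eff) d^2/dx^2 + V, discretized with central differences.
kin = hbar ** 2 / (2 * m_eff * dx ** 2)
H = (np.diag(2 * kin + V)
     - np.diag(np.full(N - 1, kin), 1)
     - np.diag(np.full(N - 1, kin), -1))

energies, states = np.linalg.eigh(H)
bound = energies[energies < V0] / 1.602176634e-19   # bound-state energies (eV)
print("bound state energies (eV):", np.round(bound[:4], 4))
# The corresponding eigenvectors peak inside the well and decay
# exponentially into the barriers, as described in the abstract.
```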
Procedia PDF Downloads 165
773 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources
Authors: Alana E. Jansen
Abstract:
With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, there have been conflicting findings as to the effects, particularly when examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies in research findings left open by various other models of team effectiveness and high-performing teams, the present study uses the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. This study used a virtual interview design in which participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, across several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, which combined Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 interviews being completed. Analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources that vary between high- and low-performing teams, factors which may play an important role in team effectiveness research going forward. Findings may assist in explaining some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding the intricacies of why only some teams achieve high-performance status.
Keywords: diversity, high-performing teams, job demands and resources, team effectiveness
Procedia PDF Downloads 187
772 Early Modern Controversies of Mobility within the Spanish Empire: Francisco de Vitoria and the Peaceful Right to Travel
Authors: Beatriz Salamanca
Abstract:
In his public lecture 'On the American Indians', given at the University of Salamanca in 1538-39, Francisco de Vitoria presented an unsettling defense of freedom of movement, arguing that the Spanish had the right to travel and dwell in the New World, since it was considered part of the law of nations [ius gentium] that men enjoyed free mutual intercourse anywhere they went. The principle of freedom of movement brought hopeful expectations, promising to bring mankind together and strengthen the ties of fraternity. However, it led to polemical situations when those whose mobility was in question represented a harmful threat or were for some reason undesired. In this context, Vitoria's argument has been seen on multiple occasions as a justification of the expansion of the Spanish empire. In order to examine the meaning of Vitoria's defense of free mobility, a more detailed look at Vitoria's text is required, together with the study of some of his earliest works, among them his commentaries on Thomas Aquinas's Summa Theologiae, where he presented relevant insights on the idea of the law of nations. In addition, it is necessary to place Vitoria's work in the context of the intellectual tradition he belonged to and the responses he obtained from some of his contemporaries who were concerned with similar issues. The claim of this research is that the Spanish right to travel advocated by Vitoria was not intended to be interpreted in absolute terms, for it had to serve the purpose of bringing peace and unity among men and could not contradict natural law. In addition, Vitoria explicitly observed that the right to travel was only valid if the Spaniards caused no harm, a condition that has been underestimated by his critics. Therefore, Vitoria's legacy is of enormous value, as it initiated a long-lasting discussion regarding the question of the grounds under which human mobility could be restricted. Again, under Vitoria's argument it was clear that this freedom was not absolute, but the controversial nature of his defense of Spanish mobility demonstrates how difficult it was and still is to address the issue of the circulation of peoples across frontiers, and shows the significance of this discussion in today's globalized world, where the rights and wrongs of notions like immigration, international trade or foreign intervention still lack sufficient consensus. This inquiry into Vitoria's defense of the principle of freedom of movement is placed here against the background of the history of political thought, political theory, international law, and international relations, following the methodological framework of contextual history of the 'Cambridge School'.
Keywords: Francisco de Vitoria, freedom of movement, law of nations, ius gentium, Spanish empire
Procedia PDF Downloads 366
771 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules that are tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work. c) We provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
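As a simple illustration of the kind of performance measures such an evaluation method must compute, the sketch below (plain Python) scores the output of a hypothetical extractor against annotated ground truth using precision, recall, and F1; it is not the authors' actual method, and the records shown are invented.

```python
# Hedged sketch: precision/recall/F1 for a web information extractor,
# comparing extracted (slot, value) pairs against annotated ground truth.
# The records below are invented examples, not the authors' evaluation sets.

ground_truth = {("title", "Cheap hotel in Seville"), ("price", "59"),
                ("rating", "4.2"), ("city", "Seville")}
extracted = {("title", "Cheap hotel in Seville"), ("price", "59"),
             ("rating", "42"), ("country", "Spain")}

true_positives = len(extracted & ground_truth)
precision = true_positives / len(extracted) if extracted else 0.0
recall = true_positives / len(ground_truth) if ground_truth else 0.0
f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# A full evaluation framework would aggregate such per-document scores
# across corpora and apply statistically sound tests (e.g., paired
# non-parametric tests) before ranking competing extractors.
```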
Procedia PDF Downloads 248
770 Tommy: Communication in Education about Disability
Authors: Karen V. Lee
Abstract:
The background and significance of this study involve communication in education by a faculty advisor exploring story and music that inform others about a disabled teacher. Social issues draw deep reflection about the emotional turmoil. As becoming a teacher is a passionate yet complex endeavor for a musician, the faculty advisor shares a poetic but painful story about a disabled teacher being inducted into the teaching profession. The qualitative research method as theoretical framework draws on autoethnography of music and story, where the faculty advisor approaches a professor for advice. His musicianship shifts her forward, backward, and sideways through feelings that evoke and provoke curriculum to remove communication barriers in education. They discover they do not transfer knowledge from educational method classes. Instead, the autoethnography embeds musical language as a metaphorical conduit for removing communication barriers in teacher education. Sub-themes involve communication barriers and educational technologies to ensure teachers receive social, emotional, physical, spiritual, and intervention disability resources that evoke visceral, emotional responses from the audience. Major findings of the study show how autoethnography of music and story brings the authors to understand wider political issues of the practicum internship for teachers with disabilities. An epiphany reveals the irony of living in a culture of both uniformity and diversity. They explore the constructs of secrecy, ideology, abnormality, and marginalization by evoking visceral and emotional responses from the audience. As the voices harmonize plot, climax, characterization, and denouement, they dramatize meaning that is episodic yet incomplete to highlight the circumstances surrounding the disabled protagonist's life. In conclusion, the qualitative research method argues for embracing storied experiences that depict communication in education. Scholarly significance embraces personal thoughts and feelings as a way of understanding social phenomena while highlighting the importance of removing communication barriers in education. The circumstance of a teacher with a disability is not uncommon in society. Thus, the authors resolve to remove barriers in education by using stories to transform the personal and cultural influences that provoke new ways of thinking about the curriculum for a disabled teacher.
Keywords: communication in education, communication barriers, autoethnography, teaching
Procedia PDF Downloads 240
769 The Effect of Realizing Emotional Synchrony with Teachers or Peers on Children's Linguistic Proficiency: The Case Study of Uji Elementary School
Authors: Reiko Yamamoto
Abstract:
This paper reports on a joint research project in which a researcher in applied linguistics and elementary school teachers in Japan explored new ways to realize emotional synchrony in a classroom in childhood education. The primary purpose of this project was to develop a cross-curriculum of the first language (L1) and second language (L2) based on the concept of plurilingualism. This concept is common in Europe, and can-do statements are used in forming the standard of linguistic proficiency in any language; these are attributed to the action-oriented approach in the Common European Framework of Reference for Languages (CEFR). CEFR has a basic tenet of language education: improving communicative competence. Can-do statements are classified into five categories based on this tenet: reading, writing, listening, speaking/interaction, and speaking/speech. The first approach of this research was to specify the linguistic proficiency of the children, who are still developing their L1. Elementary school teachers brainstormed and specified the linguistic proficiency of the children as the competency needed to synchronize with others, teachers or peers, physically and mentally. The teachers formed original can-do statements for language proficiency on the basis of the idea that emotional synchrony leads to understanding others in communication. The research objective is to determine the effect of language education based on the newly developed curriculum and can-do statements. The participants of the experiment were 72 third-graders at Uji Elementary School, Japan. For the experiment, 17 items were developed from the can-do statements formed by the teachers and divided into the same five categories as those of CEFR. A can-do checklist consisting of these items was created. The experiment consisted of three steps: first, the students evaluated themselves using the can-do checklist at the beginning of the school year. Second, one year of instruction was given to the students in Japanese and English classes (six periods a week). Third, the students evaluated themselves using the same can-do checklist at the end of the school year. The results of statistical analysis showed an enhancement of the linguistic proficiency of the students. The average results of the post-check exceeded those of the pre-check in 12 out of the 17 items. Moreover, significant differences were shown in four items, three of which belonged to the same category: speaking/interaction. It is concluded that children can come to understand others' minds through physical and emotional synchrony. In particular, emotional synchrony is what teachers should aim at in childhood education.
Keywords: elementary school education, emotional synchrony, language proficiency, sympathy with others
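A minimal sketch of the pre/post comparison described above is shown below (Python with SciPy); the item scores are invented, and the use of a paired t-test is an assumption about the analysis, offered only to illustrate how post-check results can be tested against pre-check results for each can-do item.

```python
# Hedged sketch: comparing pre- and post-check self-evaluations on one
# can-do item with a paired test. Scores are invented, and the choice of
# a paired t-test is an assumption, not necessarily the study's analysis.
import numpy as np
from scipy.stats import ttest_rel

pre = np.array([2, 3, 2, 1, 3, 2, 2, 3, 1, 2])    # hypothetical 4-point scale
post = np.array([3, 3, 3, 2, 4, 3, 2, 4, 2, 3])

t_stat, p_value = ttest_rel(post, pre)
print(f"mean pre={pre.mean():.2f}, mean post={post.mean():.2f}, "
      f"t={t_stat:.2f}, p={p_value:.4f}")
# Repeating this per item would show which of the 17 can-do items
# improved significantly over the school year.
```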
Procedia PDF Downloads 168768 Reconceptualizing “Best Practices” in Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently claim that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, and b) the adoption of stereotypical, renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is remains a black box yet to be investigated, given the continuous changes in public sector performance and the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams (benchmarkers) during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practice when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. First, we observed the benchmarkers' management of best practices in a public organization in order to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with "best practice" process owners in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work. Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. We therefore developed an interview protocol that was theoretically informed in its first part, to identify causal mechanisms suggested by previous studies, and supplemented it with questions regarding the provision of best practice support from the government. The findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization's best practice management. Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 127767 Enhancing Thai In-Service Science Teachers' Technological Pedagogical Content Knowledge Integrating Local Context and Sufficiency Economy into Science Teaching
Authors: Siriwan Chatmaneerungcharoen
Abstract:
An emerging body of '21st century skills', such as adaptability, complex communication skills, technology skills, and the ability to solve non-routine problems, is valuable across a wide range of jobs in the national economy. Within the Thai context, a focus on the Philosophy of Sufficiency Economy is integrated into science education. Thai science education has advocated infusing 21st century skills and the Philosophy of Sufficiency Economy into the school curriculum, and several educational levels have launched such efforts. Developing science teachers with the appropriate knowledge is therefore the most important factor in achieving these goals. The purposes of this study were to develop the Technological Pedagogical Content Knowledge (TPACK) of 40 cooperative science teachers and to develop a professional development model integrating a co-teaching model and a coaching system (Co-TPACK). TPACK is essential to teachers' career development. Forty volunteer in-service teachers who served as cooperative science teachers participated in this study for two years. Data sources throughout the research project consisted of teacher reflections, classroom observations, semi-structured interviews, situation interviews, questionnaires, and document analysis. An interpretivist framework was used to analyze the data. The findings indicate that, at the beginning, the teachers understood only the meaning of the Philosophy of Sufficiency Economy but did not know how to integrate it into their science classrooms; they mostly preferred lecture-based and experimental teaching styles. As Co-TPACK progressed, the teachers blended their teaching styles and learning evaluation methods. Co-TPACK consists of three cycles: a Student Teachers' Preparation Cycle, a Cooperative Science Teachers Cycle, and a Collaboration Cycle (co-teaching, co-planning, co-evaluating, and a coaching system). Co-TPACK enables the 40 cooperative science teachers, the student teachers, and the university supervisor to exchange their knowledge and experience of teaching science through many communication channels, including online ones. The teachers have made greater use of Phuket context-integrated lessons and of technology-integrated teaching and learning that make the Philosophy of Sufficiency Economy explicit, and their sustained development is shown in their lesson plans and teaching practices. Keywords: technological pedagogical content knowledge, philosophy of sufficiency economy, professional development, coaching system
Procedia PDF Downloads 464766 Examining Reading Comprehension Skills Based on Different Reading Comprehension Frameworks and Taxonomies
Authors: Seval Kula-Kartal
Abstract:
Developing students' reading comprehension skills is a difficult aim that requires long-term, systematic teaching and assessment processes. In these processes, teachers need tools that guide them on what reading comprehension is and which comprehension skills they should develop. Due to a lack of clear, evidence-based frameworks defining reading comprehension skills, especially in Turkiye, teachers and students mostly follow varied classroom processes without a clear idea of what their comprehension goals are or what those goals mean. Because teachers and students lack a clear view of comprehension targets and of the strengths and weaknesses in students' comprehension skills, formative feedback processes cannot be managed effectively. Identifying and defining influential comprehension skills may provide guidance to both teachers and students during the feedback process. Therefore, the current study examined several reading comprehension frameworks that define comprehension skills operationally. The aim of the study is to develop a simple, clear framework that teachers and students can use during their teaching, learning, assessment, and feedback processes. The study is qualitative research in which documents related to reading comprehension skills were analyzed; the study group therefore consisted of resources and frameworks that have made major contributions to theoretical and operational definitions of reading comprehension. A content analysis was conducted on the resources included in the study group. To determine the validity of the themes and sub-categories revealed by the content analysis, three educational assessment experts were asked to examine the results. The Fleiss' kappa coefficient showed consistency among the themes and categories defined by the three experts. The content analysis revealed that comprehension skills can be examined under four themes. The first and second themes focus on understanding information given explicitly or implicitly within a text. The third theme includes the skills readers use to make connections between their personal knowledge and the information given in the text. The fourth theme focuses on the skills readers use to examine the text with a critical view. The results suggest that fundamental reading comprehension skills can be examined under these four themes, and teachers are recommended to use them in their reading comprehension teaching and assessment processes. Acknowledgment: This research is supported by the Pamukkale University Scientific Research Unit within the project titled Developing a Reading Comprehension Rubric. Keywords: reading comprehension, assessing reading comprehension, comprehension taxonomies, educational assessment
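The expert-agreement check reported in this abstract can in principle be reproduced with Fleiss' kappa. The following sketch is a hedged illustration: the theme assignments are invented, and the use of statsmodels' fleiss_kappa is one common way to compute the coefficient, not necessarily the authors' own procedure.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = coded text units, columns = the three experts, values = assigned theme (1-4)
ratings = np.array([
    [1, 1, 1],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 4],
    [2, 2, 3],
    [1, 1, 1],
])

table, _ = aggregate_raters(ratings)  # per-unit counts of how many experts chose each theme
print("Fleiss' kappa:", fleiss_kappa(table))
```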
Procedia PDF Downloads 82765 Socio-cultural Influence on Teachers’ Preparedness for Inclusive Education: A Mixed Methods Study in the Nepalese Context
Authors: Smita Nepal
Abstract:
Despite being on the global education reform agenda for over two decades, interpretations and practices of inclusive education vary widely across the world. In Nepal, as in many other developing countries, inclusive education is still an emerging concept, and limited research is available on how it is conceptualized and implemented. Moreover, very little is known about how teachers, who are at the frontline of providing inclusive education, understand the concept and how well they are prepared to teach inclusively. This study addresses this gap by investigating an overarching research question: 'How prepared are Nepalese teachers to practice inclusive pedagogy?' Different societies and cultures may interpret the concepts of diversity and inclusion differently. Acknowledging that such contextual differences influence how these issues are addressed, including how teachers are prepared to provide inclusive education, this study investigated the research question using a sociocultural conceptual framework. A sequential mixed-methods research design involved quantitative data from 203 survey responses collected in the first phase, followed by qualitative data collected in the second phase through semi-structured interviews with teachers. Descriptive analysis of the quantitative data and reflexive thematic analysis of the qualitative data revealed that the participating Nepalese teachers held a narrow understanding of inclusive education and were only to a limited extent prepared to implement inclusive pedagogy. Their interpretation of inclusion centred largely on non-discrimination and the provision of equal opportunities. This interpretation was found to be influenced by a social context in which a deep understanding of human diversity is lacking, leading to discriminatory attitudes and practices. In addition, the common social norm that experiencing privilege or disadvantage is normal for diverse groups of people appears to have limited efforts to enhance teachers' understanding of, and preparedness for, inclusive education. The study has significant implications, not only in the Nepalese context but globally, for reforming policies and practices and for strengthening teacher education and professional development systems to promote inclusion in education. It also highlights the importance of further context-specific research in this area to ensure genuinely inclusive education that values socio-cultural differences. Keywords: inclusive education, inclusive pedagogy, sociocultural context, teacher preparation
Procedia PDF Downloads 71764 Policy Implications of Cashless Banking on Nigeria’s Economy
Authors: Oluwabiyi Adeola Ayodele
Abstract:
This study analysed the policy and general issues that have arisen over time in Nigeria's cashless banking environment as a result of the lack of a legal framework for electronic banking in Nigeria. It undertook an in-depth study of the cashless banking system: it discussed the evolution, growth, and development of cashless banking in Nigeria; outlined the expected benefits of the cashless banking system; appraised regulatory issues and other prevalent problems; and made appropriate recommendations where necessary. The study relied on primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria, statutes, conventions, and judicial decisions, while the secondary sources included books, journal articles, newspapers, and internet materials. The study revealed that cashless banking has been adopted in Nigeria but is still at a developing stage, and that there is no law regulating it; what Nigeria relies on for regulation is the Central Bank of Nigeria's Cashless Policy, 2014. The Banks and Other Financial Institutions Act, Chapter B3, LFN 2004, lacks provisions to accommodate issues arising from internet banking. However, under the general principles of legality in criminal law and the provisions of the Nigerian Constitution, a person can only be punished for conduct that has been defined as criminal by written law, with the penalties specifically stated in that law. Although Nigeria has potent laws for the regulation of paper-based banking, these laws cannot simply be applied to paperless transactions, because the issues involved in the two types of transaction differ. The study also revealed that the absence of law in Nigeria's cashless banking environment exposes consumers to considerable risk. It showed that the creation of banking markets via the internet relies on both available technologies and appropriate laws and regulations, and that the laws of some of the other countries considered have addressed most of the legal issues and other problems prevalent in the cashless banking environment. Some further problems in the Nigerian cashless banking environment were also identified. The study concluded that, for Nigeria to resolve the legal issues raised in its cashless banking environment and the other problems of cashless banking, it needs a viable legal framework for internet banking. The Central Bank of Nigeria's policy on cashless banking is not potent enough to tackle these challenges because policies have only a persuasive, not a binding, effect; appropriate laws for the regulation of cashless banking in Nigeria are therefore needed. There is also a need to create greater awareness of the system among Nigerians and to solve infrastructural problems, such as the prevalent power outages that frequently disrupt internet connectivity. Keywords: cashless-banking, Nigeria, policies, laws
Procedia PDF Downloads 489763 Challenges influencing Nurse Initiated Management of Retroviral Therapy (NIMART) Implementation in Ngaka Modiri Molema District, North West Province, South Africa
Authors: Sheillah Hlamalani Mboweni, Lufuno Makhado
Abstract:
Background: The increasing number of people who test HIV positive and require antiretroviral therapy (ART) prompted the National Department of Health to adopt the WHO recommendation of task shifting, whereby professional nurses (PNs), rather than hospital doctors, initiate ART. This resulted in the decentralization of services to primary health care (PHC), generating a need to capacitate PNs in NIMART. After years of training, the impact of NIMART was assessed, and it was established that, even though an increased number of people accessed ART, the quality of care remained a serious concern. The study aims to answer the following question: what are the challenges influencing NIMART implementation in primary health care? Objectives: This study explores the challenges influencing NIMART training and implementation and makes recommendations to improve patient and HIV program outcomes. Methods: A qualitative, explorative program evaluation research design was used. The study was conducted in the rural districts of North West province. Purposive sampling was used to select PNs trained in NIMART. Focus group discussions (FGDs) with 6-9 participants each were used to collect data, and the data were analysed using ATLAS.ti. Results: Five FGDs with 28 PNs were conducted, and three program managers were interviewed. The results revealed two themes: inadequacy of NIMART training and health care system challenges. Conclusion: The deficiencies in NIMART training and the health care system challenges are a public health concern, as they compromise the quality of HIV management, result in poor patient outcomes, and retard progress toward ending the HIV epidemic. They should be dealt with decisively by all stakeholders. Recommendations: The National Department of Health should improve NIMART training and HIV management through standardization of the NIMART training curriculum with the involvement of all relevant stakeholders and skilled facilitators, the introduction of pre-service NIMART training in institutions of higher learning, support of PNs by district and program managers, plans to deal with staff shortages and negative attitudes, and measures to ensure compliance with guidelines. There is also a need to develop a conceptual framework that provides guidance and strengthens NIMART implementation in PHC facilities. Keywords: antiretroviral therapy, nurse initiated management of retroviral therapy, primary health care, professional nurses
Procedia PDF Downloads 158762 The Foucaultian Relationship between Power and Knowledge: Genealogy as a Method for Epistemic Resistance
Authors: Jana Soler Libran
Abstract:
The primary aim of this paper is to analyze the relationship between power and knowledge proposed in Michel Foucault's theory. Taking into consideration the role of power in knowledge production, the goal is to evaluate to what extent genealogy can be presented as a practical method for epistemic resistance. The methodology consists of a review of Foucault's writings on the topic; conceptual analysis is applied in order to understand the effect of the double dimension of power on knowledge production. In its negative dimension, power is conceived as an organ of repression, vetoing certain instances of knowledge considered deceitful. In its positive dimension, by contrast, power works as an organ of the production of truth by means of institutionalized discourses. This double dimension of power leads to the first main findings of the present analysis: no truth or knowledge can lie outside power's action, and power is constituted through accepted forms of knowledge. To support these statements, Foucaultian discourse formations are evaluated, presenting external exclusion procedures as paradigmatic practices that demonstrate how power creates and shapes the validity of certain epistemes. Taking into consideration power's mechanisms for producing and reproducing institutionalized truths, the paper then presents the Foucaultian praxis of genealogy as a method to reveal power's intentions, instruments, and effects in the production of knowledge. In this sense, genealogy is a practice which, first, reveals which instances of knowledge are subjugated to power and, second, promotes these peripheral discourses as a form of epistemic resistance. To counterbalance these main theses, objections to Foucault's work from Nancy Fraser, Linda Nicholson, Charles Taylor, Richard Rorty, Alvin Goldman, and Karen Barad are discussed. In essence, understanding the Foucaultian relationship between power and knowledge is essential for analyzing how contemporary discourses are produced by both traditional institutions and new forms of institutionalized power, such as mass media or social networks. Michel Foucault's practice of genealogy is therefore relevant not only for its philosophical contribution as a method of uncovering the effects of power in knowledge production, but also because it constitutes a valuable theoretical framework for political theory and for sociological studies concerning the formation of societies and individuals in the contemporary world. Keywords: epistemic resistance, Foucault's genealogy, knowledge, power, truth
Procedia PDF Downloads 124761 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches, which exploit the existence of well-assessed physical models and build neural networks upon them to integrate the available data. Since regularization procedures are mandatory in this context to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form $q^* = \arg\min_q D(y, \tilde{y})$ (1), where $D$ is a loss function which typically contains a discrepancy (data fidelity) term plus other ad hoc terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters $q$ given the set of observations $y$. Here $\tilde{y}$ is the computable approximation of $y$, which may be obtained from a neural network or in the classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks, a procedure that forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type $\hat{q}^* = \arg\min_q D(y, \tilde{y}) + R(q)$, where $R(q)$ is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise. Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
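As a reading aid for the formulation above, the sketch below minimizes a discrepancy term plus a regularization functional, $D(y, \tilde{y}) + R(q)$, by gradient-based optimization. It is a toy illustration under stated assumptions: a random linear operator stands in for the PDE forward model, $R(q)$ is a simple Tikhonov penalty with a fixed weight, and PyTorch is used for the minimization; none of this reproduces the authors' DOT pipeline, auto-decoder priors, or physics penalties.

```python
import torch

torch.manual_seed(0)

n_params, n_obs = 64, 32
F = torch.randn(n_obs, n_params) / n_obs ** 0.5   # stand-in linear forward model: y~ = F q
q_true = torch.zeros(n_params)
q_true[20:28] = 1.0                               # synthetic "optical coefficient" profile
y = F @ q_true + 0.01 * torch.randn(n_obs)        # noisy synthetic observations

q = torch.zeros(n_params, requires_grad=True)     # unknown coefficients to reconstruct
lam = 1e-2                                        # regularization weight, fixed a priori
opt = torch.optim.Adam([q], lr=0.05)

for _ in range(2000):
    opt.zero_grad()
    data_fidelity = torch.sum((F @ q - y) ** 2)   # discrepancy term D(y, y~)
    penalty = lam * torch.sum(q ** 2)             # regularization functional R(q)
    (data_fidelity + penalty).backward()
    opt.step()

rel_err = torch.norm(q.detach() - q_true) / torch.norm(q_true)
print(f"relative reconstruction error: {rel_err.item():.3f}")
```

Swapping the Tikhonov penalty for a learned regularizer, or constraining q to the output of a decoder network, would follow the same optimization pattern.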
Procedia PDF Downloads 74760 Italian Sign Language and Deafness in a North-Italian Border Region: Results of Research on the Linguistic Needs of Teachers and Students
Authors: Maria Tagarelli De Monte
Abstract:
In 2021, the passage of the law recognizing Italian Sign Language (LIS) as the language of the Italian deaf minority prompted the inclusion of this visual-gestural language in the curricula of interpreters and translators who choose the academic setting for their training. Yet a gap remains in the LIS education of teachers and communication assistants, who are the reference figures for people who are deaf or hard of hearing in mainstream education. As is well documented in the scientific literature, deaf children often experience severe difficulties with the languages spoken in the country where they grow up, manifesting at all levels of literacy competence. The research introduced here explores the experience of deaf students (and their teachers) attending schools in areas characterized by strong native bilingualism, such as Friuli-Venezia Giulia (FVG) on Italy's north-eastern border. This region is peculiar in that the native population may be bilingual in Italian and Friulian (50% of the local population), German, and/or Slovenian. The research involved schools of all levels in Friuli in order to understand the relationship between the language skills expressed by teachers and those shown by deaf learners with a background in sign language. In addition to collecting specific information on teachers' preparation in deaf-related matters and LIS, the research highlighted the often under-considered role of the communication assistants who work alongside deaf students. On several occasions, teachers and assistants were unanimous in affirming the importance of mutual collaboration and of adequate consideration of the educational-rehabilitative history of the deaf child and her family. The research was based on a mixed method of structured questionnaires and semi-structured interviews with the teachers concerned. A varied and complex picture emerged, showing an asymmetry in the preparation of personnel dedicated to the deaf learner. Considering how long Italian education has invested in creating an inclusive and accessible school system (e.g. with the "Ten Theses for Democratic Language Education"), a constructive analysis completes the discussion in an attempt to understand how linguistic (and modal) differences can become levers of inclusion. Keywords: FVG, LIS, linguistic needs, deafness, teacher education, bilingual bimodal children, communication assistants, inclusion model
Procedia PDF Downloads 47