Search results for: grid architecture framework
832 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. In sub-Saharan Africa in particular, the main constraint is the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?” as a first step toward the quantification of carbon stock in different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic practice as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information to support decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels, used for efficient access and analysis.
A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems and conducting qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, and the field data will be used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics; (ii) characteristics of the different systems (main species, management, area, etc.); and (iii) an assessment report of the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
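The stratification step described in this abstract (unsupervised clustering of per-pixel NDVI time series into vegetation strata) can be sketched as follows. This is a minimal, self-contained illustration with invented reflectance values and a tiny k-means implementation, not the authors' data-cube pipeline:

```python
# Minimal sketch (not the authors' pipeline): stratify pixels by clustering
# their NDVI time series with a tiny k-means. Reflectances are invented;
# a real run would read them from the earth observation data cube.

def ndvi(nir, red):
    # Normalized difference vegetation index for one band pair.
    return (nir - red) / (nir + red)

# Hypothetical (nir, red) reflectances for 6 pixels over 4 dates.
pixels = [
    [(0.50, 0.10), (0.60, 0.10), (0.55, 0.10), (0.50, 0.10)],  # dense vegetation
    [(0.50, 0.10), (0.62, 0.10), (0.50, 0.12), (0.52, 0.10)],
    [(0.30, 0.20), (0.35, 0.20), (0.30, 0.25), (0.30, 0.20)],  # sparse vegetation
    [(0.30, 0.22), (0.33, 0.20), (0.31, 0.20), (0.30, 0.20)],
    [(0.20, 0.20), (0.20, 0.22), (0.21, 0.20), (0.20, 0.20)],  # bare soil
    [(0.20, 0.21), (0.19, 0.20), (0.20, 0.20), (0.20, 0.19)],
]
series = [[ndvi(n, r) for n, r in px] for px in pixels]

def assign(v, centers):
    # Index of the nearest center (squared Euclidean distance).
    return min(range(len(centers)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))

def kmeans(data, centers, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in data:
            groups[assign(v, centers)].append(v)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Deterministic initialization (one seed per expected stratum) for clarity.
centers = kmeans(series, [series[0], series[2], series[4]])
labels = [assign(v, centers) for v in series]
print("strata labels:", labels)
```

Pixels with similar NDVI dynamics end up in the same stratum; the field data collected per stratum would then train the supervised (random forest) step.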
Procedia PDF Downloads 145
831 NFTs, Between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case
Authors: Andrea Ando
Abstract:
The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. More recently, non-fungible tokens have come to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play an important role in our online interactions. They are indeed increasingly present in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but they are very likely to mark a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect on the contemporary tech market: the former CEO of the social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. As one might imagine, their growth paved the way for civil disputes, mostly regarding their position under the current intellectual property law of each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge, indeed, concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT is a violation of intellectual property and/or property rights.
Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. Does a person have ownership of the cryptographic code, which is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or of what is attached to this blockchain, hence even a physical object or piece of art? Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid this foundation, the paper examined the OpenSea case, and finally, it analysed whether those expectations were met or not.
Keywords: technology, technology law, digital law, cryptoassets, NFTs, property law, intellectual property law, copyright law
Procedia PDF Downloads 89
830 The Impact of Task Type and Group Size on Dialogue Argumentation between Students
Authors: Nadia Soledad Peralta
Abstract:
Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation for negotiating contradictions and complex decisions, so argumentation is a focus for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments are analyzed according to group size (dyads and triads) and type of task (reading of frequency tables, causal explanation of physical phenomena, decisions regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed from the results of a pre-test that ensured the heterogeneity of points of view between participants. Group 1 consisted of 56 participants (working in dyads, total: 28), and group 2 consisted of 33 participants (working in triads, total: 11). A quasi-experimental design was used in which the effects of the two variables (group size and type of task) on argumentation were analyzed. Three types of argumentation are described: authentic dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads. That is, although people express their own arguments, there is no authentic argumentative interaction; accordingly, there are few reciprocal evaluations and counter-arguments in dyads. By contrast, authentically dialogical argument prevails in triads, showing constant feedback between participants’ points of view. It was observed that, in general, the type of task generates specific types of argumentative interactions.
However, it is possible to emphasize that authentically dialogical arguments predominate in logical tasks, whereas individualistic or pseudo-dialogical arguments are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified in an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains. The argumentative function not only allows metacognitive reflection on one’s own point of view but also allows people to benefit from exchanging points of view in interactive contexts.
Keywords: sociocognitive interaction, argumentation, university students, group size
Procedia PDF Downloads 83
829 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises
Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska
Abstract:
Research background: Problems related to occupational health and decreasing levels of safety are common in the construction industry. An important factor in occupational safety in the construction industry is scaffold use. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled and maintained in accordance with safety procedures. Unfortunately, increasing demand for new construction projects is still linked to a high level of occupational accidents. Therefore, it is crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the way an assessment is done and its reliability are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking decisions regarding risk assessment, may show a lack of reliability in checking the results of the decisions taken. Purpose of the article: The aim was to identify crucial parameters that could be modeled with a Risk Assessment Model (RAM) to improve both building enterprise productivity and/or development potential and the safety climate. The developed RAM could be of benefit for predicting high-risk construction activities, and thus for preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, under which over 450 individual research cycles were carried out. The conducted research trials included variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in the behaviors and habits of staff.
A genetic modeling tool was used for developing the RAM. Findings and value added: Common types of trades, accidents, and accident causes have been explored, in addition to suitable risk assessment methods and criteria. We have found that the initial worker stress level is a more direct predictor of the unsafe chain of events leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.
Keywords: safety climate, occupational health, civil engineering, productivity
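As one illustration of how a genetic modeling tool can fit a risk model to historical accident data, the sketch below evolves the weights of a simple linear risk score with a toy genetic algorithm. The feature names, records, and GA settings are all hypothetical, not the study's actual RAM:

```python
import random

# Toy genetic algorithm (illustrative only, not the study's actual RAM):
# evolve the weights of a linear risk score against hypothetical historical
# accident records, so that high scores coincide with accident cases.
random.seed(42)

# Each record: [stress, workload, harmful_factors, training_freq] in [0, 1],
# plus a label (1 = accident occurred). Values are invented.
DATA = [
    ([0.9, 0.4, 0.3, 0.2], 1),
    ([0.8, 0.6, 0.5, 0.1], 1),
    ([0.7, 0.2, 0.6, 0.3], 1),
    ([0.2, 0.7, 0.4, 0.8], 0),
    ([0.1, 0.3, 0.2, 0.9], 0),
    ([0.3, 0.5, 0.1, 0.7], 0),
]

def fitness(weights):
    # Number of records classified correctly by thresholding the
    # normalized weighted score at 0.5.
    total = sum(weights)
    correct = 0
    for x, y in DATA:
        score = sum(w * v for w, v in zip(weights, x)) / total
        correct += int((score > 0.5) == bool(y))
    return correct

def crossover(a, b):
    # Uniform crossover: each weight comes from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(weights, step=0.3):
    # Perturb each weight, keeping it positive.
    return [max(1e-3, w + random.uniform(-step, step)) for w in weights]

# Generational loop: keep the 10 fittest, refill with mutated offspring.
population = [[random.random() + 1e-3 for _ in range(4)] for _ in range(30)]
for _ in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print("best weights:", [round(w, 2) for w in best])
print("correct classifications:", fitness(best), "of", len(DATA))
```

In this toy data the stress feature separates accident from non-accident records, so the evolved weights tend to emphasize it, echoing the study's finding about initial stress level.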
Procedia PDF Downloads 318
828 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, the information on the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, using the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC’s subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable; it has a fixed functional form and known coefficients, and this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box, in which the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, sampling is not exhaustive in the sampling-based methodology. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
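A distributional p-box of the kind described here is commonly propagated with a nested ("double-loop") sampling scheme: an outer loop samples the epistemic interval parameters, and an inner loop propagates the aleatory distribution they define. The sketch below is a minimal illustration with an invented model and invented intervals, not the NASA Langley challenge problem:

```python
import random

# Double-loop ("nested") sampling sketch for a distributional p-box.
# Outer loop: draw the epistemic parameters (mean and standard deviation,
# each known only to lie in an interval). Inner loop: propagate the
# aleatory normal variable they define through the model.
random.seed(0)

def model(x):
    # Hypothetical system response (stand-in for the real model).
    return x ** 2 + 0.5 * x

def percentile(samples, q):
    # Simple empirical percentile.
    s = sorted(samples)
    return s[min(len(s) - 1, int(q * len(s)))]

lower_5th, upper_95th = [], []
for _ in range(100):                       # epistemic (outer) loop
    mu = random.uniform(0.4, 0.6)          # mean in a known interval
    sigma = random.uniform(0.05, 0.15)     # std. dev. in a known interval
    ys = [model(random.gauss(mu, sigma))   # aleatory (inner) loop
          for _ in range(400)]
    lower_5th.append(percentile(ys, 0.05))
    upper_95th.append(percentile(ys, 0.95))

# Envelope over the epistemic realizations gives bounds on the
# response percentiles -- the p-box for the output.
print("5th-percentile lower bound:", round(min(lower_5th), 3))
print("95th-percentile upper bound:", round(max(upper_95th), 3))
```

Because neither loop is exhaustive, the resulting envelope can underestimate the true output bounds, which is the limitation the abstract's optimization-based (PBO) strategy is meant to address.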
Procedia PDF Downloads 240
827 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of new media, and particularly social media, through fixed or mobile devices, has changed in a staggering way our perception of what is “intimate" and "safe" and what is not, in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as from the empirical findings of a longitudinal comparative research study, this work proposes an integrative model for comprehending mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. Some of the main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras. The role of social media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for “elite users / technical knowledge keepers" into a place of "open sociability” for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the “audience” we address in interpersonal communication.
The previous "general and unknown audience" of personal home pages has been converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the 'private' and 'public' nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication and the critical changes that have occurred since the advent of Web 2.0 lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model for understanding the way interpersonal communication evolves, based on a revision of social penetration theory, is proposed here.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 372
826 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis
Authors: Serhat Tüzün, Tufan Demirel
Abstract:
Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and offers suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-1980s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a Web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common use of Web-based DSS has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options.
The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this gap, this paper surveys recent articles on DSS. The literature has been reviewed in depth, and by classifying previous studies according to their emphases, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments in the subject, this study aims to analyze the future trends in decision support systems.
Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review
Procedia PDF Downloads 279
825 Readout Development of an LGAD-Based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome therapy uncertainties and optimize treatment outcome, the best possible description of radiation quality is of paramount importance for linking the physical dose to biological effects. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called “Mean Chord Length” (MCL) approximation and is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity, replacing the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are very suitable for particle tracking because they can be thinned down to tens of micrometers and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing SoC (System on Chip) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out by 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and finally, the activated strips are stored, again relying on FPGA-based solutions. In this work, we will provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
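To illustrate the quantity being refined here, the sketch below contrasts lineal energy computed with the mean-chord-length approximation (for a convex site, MCL = 4V/S, which reduces to 2d/3 for a sphere of diameter d) against lineal energy computed from a per-event track length of the kind a tracker stage would supply. The event energies and track lengths are invented for the example, not HDM data:

```python
# Toy comparison (invented numbers, not HDM data): lineal energy y computed
# with the mean-chord-length approximation versus y computed with the actual
# per-event track length a tracker stage would provide.

SITE_DIAMETER_UM = 2.0                   # TEPC-simulated tissue site, 2 um
MCL_UM = 2.0 * SITE_DIAMETER_UM / 3.0    # mean chord length of a sphere: 2d/3

# Hypothetical events: (energy deposited [keV], measured track length [um]).
events = [
    (3.0, 1.9),   # near-diametral crossing: track close to the diameter
    (1.2, 0.8),   # edge-clipping track: real chord much shorter than the MCL
    (2.1, 1.5),
]

for energy_kev, track_um in events:
    y_mcl = energy_kev / MCL_UM       # conventional estimate, same denominator for all tracks
    y_track = energy_kev / track_um   # estimate using the measured track length
    print(f"E={energy_kev} keV  y(MCL)={y_mcl:.2f}  y(track)={y_track:.2f} keV/um")
```

The edge-clipping event shows the motivation: dividing by the fixed MCL underestimates y for short real chords, which is exactly what per-event tracking corrects.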
Procedia PDF Downloads 175
824 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis
Authors: Michael Lognoul
Abstract:
In April 2021, the European Commission released its AI Act Proposal, the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on respect for fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems for the benefit of natural persons facing those systems (e.g., information on the AI nature of systems in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, for the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, the content of the obligations, and their practical implications. On the scope of the transparency obligations tailored for high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific, from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems.
However, on the practical implications of these transparency obligations, the research notes that concern arises due to the potential illiteracy of high-risk AI system users. They might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information be comprehensible to its recipients (i.e., users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of the transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High Level Expert Group, should be included in the Proposal in addition to the transparency obligations.
Keywords: AI Act Proposal, explainability of AI, high-risk AI systems, transparency requirements
Procedia PDF Downloads 317
823 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and requires high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based testing was the only diagnostic procedure available [6]; it has some significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measuring a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools.
However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we would like to discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight “points of growth” for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses all the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test for measuring intellectual development in school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
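One of the modern psychometric approaches named in the keywords, computerized adaptive testing (CAT), replaces the fixed linear test with items chosen on the fly. The sketch below shows the core item-selection step under a two-parameter logistic (2PL) IRT model; the item bank and the ability estimate are invented for illustration, not taken from the test under development:

```python
import math

# Sketch of the item-selection step in computerized adaptive testing (CAT)
# under a two-parameter logistic (2PL) IRT model. The item bank and the
# ability estimate below are invented for illustration.

def p_correct(theta, a, b):
    # 2PL: probability of a correct response at ability theta,
    # with discrimination a and difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    # Item information under 2PL: a^2 * p * (1 - p).
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b).
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5), (1.3, 0.4)]

def next_item(theta, administered):
    # Adaptive rule: among unused items, pick the one most informative
    # at the current ability estimate.
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_information(theta, *bank[i]))

theta_hat = 0.3            # ability estimate after the responses so far
chosen = next_item(theta_hat, administered={2})
print("next item:", chosen, "with (a, b) =", bank[chosen])
```

After each response the ability estimate is updated and the rule is applied again, which is what lets an adaptive test match item difficulty to the respondent and shorten testing time relative to the classic linear structure.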
Procedia PDF Downloads 80
822 Developing a Decision-Making Tool for Prioritizing Green Building Initiatives
Authors: Tayyab Ahmad, Gerard Healey
Abstract:
Sustainability in the built environment sector is subject to many development constraints. Building projects are developed under different deliverable requirements, which makes each project unique. For an owner organization, e.g., a higher-education institution, with a significant building stock, it is important to prioritize some sustainability initiatives over others in order to align sustainable building development with organizational goals. Point-based green building rating tools, i.e., Green Star, LEED, and BREEAM, are becoming increasingly popular and are well acknowledged worldwide for verifying sustainable development. It is imperative to synthesize a multi-criteria decision-making tool that can capitalize on the point-based methodology of rating systems while customizing the sustainable development of building projects according to the individual requirements and constraints of the client organization. A multi-criteria decision-making tool for the University of Melbourne is developed that builds on the action learning and experience of implementing green buildings at the University of Melbourne. The tool evaluates different sustainable building initiatives based on the framework of the Green Star rating tool of the Green Building Council of Australia. For each sustainability initiative, the decision-making tool makes an assessment based on at least five performance criteria, including the ease with which the initiative can be achieved and its potential to enhance project objectives, reduce life-cycle costs, enhance the University’s reputation, and increase confidence in quality construction. The use of a weighted aggregation mathematical model in the proposed tool can play a considerable role in the decision-making process of a green building project by indexing the green building initiatives in terms of organizational priorities.
The index value of each initiative is based on its alignment with key performance criteria. The usefulness of the decision-making tool is validated by conducting structured interviews with key stakeholders involved in the development of sustainable building projects at the University of Melbourne. The proposed tool helps a client organization decide which sustainability initiatives and practices are more important to pursue than others within limited resources.
Keywords: higher education institution, multi-criteria decision-making tool, organizational values, prioritizing sustainability initiatives, weighted aggregation model
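The weighted aggregation described above can be sketched as a simple index computation. The criteria names, weights, and scores below are illustrative assumptions, not the tool's actual model:

```python
# Hypothetical sketch of a weighted-aggregation index for ranking
# sustainability initiatives; criteria and weights are illustrative.

# Organizational priority weights over five performance criteria (sum to 1).
weights = {
    "ease": 0.15, "objectives": 0.30, "lifecycle_cost": 0.25,
    "reputation": 0.20, "quality_confidence": 0.10,
}

# Each initiative scored 1-5 against each criterion (invented scores).
initiatives = {
    "rainwater_harvesting": {"ease": 4, "objectives": 3, "lifecycle_cost": 5,
                             "reputation": 3, "quality_confidence": 4},
    "solar_pv": {"ease": 2, "objectives": 5, "lifecycle_cost": 4,
                 "reputation": 5, "quality_confidence": 4},
}

def index_value(scores, weights):
    """Weighted sum of criterion scores: higher means higher priority."""
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(initiatives,
                 key=lambda i: index_value(initiatives[i], weights),
                 reverse=True)
```

Changing the weights re-indexes the same initiatives, which is how organizational priorities enter the ranking.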
Procedia PDF Downloads 234
821 Campaigns of Youth Empowerment and Unemployment in Development Discourses: The Case of Ethiopia
Abstract:
Amid today’s sharp downturn in the global economy, nations face many economic, social, and political challenges; worldwide, there is widespread insecurity over food and other basic needs. Further, as a result of conflict, natural disasters, and failures of leadership, youths are disempowered and unemployed, especially in developing countries. To handle these challenges well, it is important to investigate and deliberate on youth, unemployment, empowerment, and possible management approaches, as youths have the potential to carry and fight such battles. The method adopted is a qualitative analysis of secondary data sources on youth empowerment, unemployment, and development as an inclusive framework. Youth unemployment is a major development problem for most African countries. In Ethiopia, following weak youth empowerment, youth unemployment has increased over time, and access to quality education and to organizational networks remains an important constraint. Although access to quality education is an important constraint for Ethiopian youths, the country’s youths are also deceived and harassed in a vicious political struggle as they work to bring social and economic change to the country. Further, thousands of youths have been sidelined, criminalized, or have lost their lives; this leaves youths hopeless and angry and pushes them toward addiction, prostitution, violence, and irregular migration. This challenge is not confined to African countries; it is a global burden and has been taken up as a global agenda. As a resolution, building a healthy education system can create independent youths who achieve success and accelerate development.
Developing countries should pursue the cultivation of empowerment tools through long- and short-term education, implement policy in action, diminish wide-ranging gaps (of religion, ethnicity, and region), and treat a large youth population as an opportunity to be empowered. Further, managing and empowering youths to take part in decision-making, giving them political weight, and building networks of organizations so they can more easily access job opportunities are important measures to keep youths in work, both to increase their incomes and to improve the country’s food security balance.
Keywords: development, Ethiopia, management, unemployment, youth empowerment
Procedia PDF Downloads 59
820 Euthanasia as a Case of Judicial Entrepreneurship in India: Analyzing the Role of the Supreme Court in the Policy Process of Euthanasia
Authors: Aishwarya Pothula
Abstract:
Euthanasia in India is a politically dormant policy issue in the sense that discussions around it are sporadic in nature (usually following developments in specific cases) and it stays a dominant issue in the public domain only for a fleeting period. In other words, it is a non-political issue that has been unable to get onto the policy agenda. This paper studies how the Supreme Court of India (SC) plays a role in euthanasia’s policy making. In 2011, the SC independently put a law in place that legalized passive euthanasia through its judgement in the Aruna Shanbaug v. Union of India case. According to this, it is no longer illegal to withhold/withdraw a patient’s medical treatment in certain cases. This judgement, therefore, is the empirical focus of this paper. The paper employs two techniques of discourse analysis to study the SC’s system of argumentation. The two methods, text analysis using Gasper’s analysis table and frame analysis, are complemented by two discourse techniques called metaphor analysis and lexical analysis. The framework within which the analysis is conducted lies in 1) the judicial process of India, i.e., the SC procedures and the constitutional rules and provisions, and 2) John W. Kingdon’s theory of policy windows and policy entrepreneurs. The results of this paper are three-fold: first, the SC dismisses the petitioner’s request for passive euthanasia on inadequate and weak grounds, thereby setting no precedent for the historic law they put in place. In other words, they leave the decision open for the Parliament to act upon. Hence the judgement, contrary to what many have argued, is by no means an instance of judicial activism/overreach. Second, they define euthanasia in a way that resonates with existing broader societal themes.
They combine this with a remarkable use of authoritative and protective tones/stances to settle at an intermediate position that balances the possible opposition to their role in the process and what they (perhaps) perceive to be an optimal solution. Third, they soften up the policy community (including the public) to the idea of passive euthanasia, leading it towards a Parliamentarian legislation. They achieve this by shaping prevalent principles, provisions and worldviews through an astute use of the legal instruments at their disposal. This paper refers to this unconventional role of the SC as ‘judicial entrepreneurship’; it is also the first scholarly contribution towards research on euthanasia as a policy issue in India.
Keywords: argumentation analysis, Aruna Ramachandra Shanbaug, discourse analysis, euthanasia, judicial entrepreneurship, policy-making process, Supreme Court of India
Procedia PDF Downloads 267
819 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela
Authors: Maria Antonieta Erna Castillo Holly
Abstract:
During the last decade, data analysis, strategic decision making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data recording methodology in the field, make it difficult to use information systems, complete data analysis, and rely on their support for making the right strategic decisions. This is essential in Agriculture 4.0, where the increase in global demand for fresh agricultural products of tropical origin throughout the seasons of the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability, which means extensive data. Having quality information available and updated in real time on what, how much, how, when, where, at what cost, and on compliance with production quality standards represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented.
Finally, some improvement proposals and tentatively recommended applications are added to the process, with the aim of providing better qualified and traceable georeferenced data for subsequent analysis and for more agile and accurate strategic decision making. One of the main findings of this study is the lack of quality data treatment in Latin America and especially in the Caribbean basin; one of the most important questions is how to manage the lack of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.
Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela
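A georeferenced record in such a methodology might be shaped as a GeoJSON Feature so that it loads directly as a layer in a geographic information system. The sketch below is an assumed schema, not the study's actual format: the field names and values are hypothetical, and the coordinates only roughly approximate the Mérida area.

```python
# Assumed (hypothetical) georeferenced record for a protected-agriculture
# production unit (UP), shaped as a GeoJSON Feature for direct GIS ingestion.
import json

up_record = {
    "type": "Feature",
    "geometry": {
        # GeoJSON order is [longitude, latitude]; rough Mérida-area point.
        "type": "Point",
        "coordinates": [-71.15, 8.59],
    },
    "properties": {
        "up_id": "UP-001",          # hypothetical identifier
        "structure": "greenhouse",  # tall-structure production unit
        "crop": "tomato",
        "area_m2": 2400,
        "survey_date": "2020-06-15",
    },
}

geojson = json.dumps(up_record)
```

Serializing to GeoJSON keeps the record traceable and lets the same data feed both dashboards and satellite-map layers.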
Procedia PDF Downloads 131
818 Changes in Skin Microbiome Diversity According to the Age of Xian Women
Authors: Hanbyul Kim, Hye-Jin Kin, Taehun Park, Woo Jun Sul, Susun An
Abstract:
Skin is the largest organ of the human body and provides a diverse habitat for various microorganisms. The ecology of the skin surface selects distinctive sets of microorganisms and is influenced by both endogenous intrinsic factors and exogenous environmental factors. The diversity of the bacterial community in the skin also depends on multiple host factors: gender, age, health status, and location. Among them, age-related changes in skin structure and function are attributable to combinations of endogenous intrinsic factors and exogenous environmental factors. Skin aging is characterized by a decrease in sweat, sebum, and immune functions, resulting in significant alterations in skin surface physiology, including pH, lipid composition, and sebum secretion. The present study gives a comprehensive clue to the variation of skin microbiota and its correlations with age by analyzing and comparing the skin microbiome metagenome using a next-generation sequencing method. Skin bacterial diversity and composition were characterized and compared between two age groups of Xian, Chinese women: younger (20-30 y) and older (60-70 y). A total of 73 healthy women met two conditions: (I) living in Xian, China; (II) maintaining healthy skin status during the period of this study. Based on the Ribosomal Database Project (RDP) database, the skin samples of the 73 participants were dominated by the ten most abundant genera, including Chryseobacterium, Propionibacterium, Enhydrobacter, and Staphylococcus. Although these genera were the most predominant overall, each genus showed a different proportion in each group. The most dominant genus, Chryseobacterium, was relatively more abundant in the young group than in the old group. Similarly, Propionibacterium and Enhydrobacter occupied a higher proportion of the skin bacterial composition of the young group. Staphylococcus, in contrast, was more abundant in the old group.
The beta diversity, which represents the ratio between regional and local species diversity, differed significantly between the two age groups. Likewise, the principal coordinate analysis (PCoA) values, which represent phylogenetic distances between samples in a two-dimensional framework using the samples’ OTU (operational taxonomic unit) profiles, also showed differences between the two groups. Thus, our data suggest that the composition and diversification of skin microbiomes in adult women are largely affected by chronological and physiological skin aging.
Keywords: next generation sequencing, age, Xian, skin microbiome
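The kind of beta-diversity comparison described above can be illustrated with a toy computation: Bray-Curtis dissimilarities between OTU count profiles, followed by classical PCoA (double-centring the squared distance matrix and eigendecomposing it). The counts below are invented, not the study's samples:

```python
# Toy sketch of beta diversity + PCoA; OTU counts are invented.
import numpy as np

# rows: samples (2 "young", 2 "old"); columns: OTU counts
counts = np.array([
    [30, 25, 20, 5],
    [28, 24, 22, 6],
    [10,  8, 12, 40],
    [12,  9, 11, 38],
], dtype=float)

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two count profiles."""
    return np.abs(u - v).sum() / (u + v).sum()

n = len(counts)
D = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
              for i in range(n)])

# Classical PCoA: double-centre -0.5 * D^2, then take the top eigenvectors.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
```

With data like this, within-group dissimilarities are much smaller than between-group ones, which is what a group separation in the PCoA plot reflects.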
Procedia PDF Downloads 155
817 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell
Authors: Deborah Eric, Abbas Ahmad Khan
Abstract:
Lead halide perovskite quantum dots have attracted immense scientific and technological interest for photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx based quantum dots to implement their use in intermediate band solar cells (IBSC). These materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. The quantum dot layer optimizes the position and bandwidth of the intermediate band (IB) that lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with geometries including spherical, cubic, and conical has been embedded in the CsPbBr3 matrix. Bound-state wavefunctions give rise to minibands, which result in the formation of the IB; if there is more than one miniband, there can be more than one IB. Optimizing the QD size yields more IBs in the forbidden region. The one-band time-independent Schrödinger equation in the effective mass approximation with a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation, and the eigenenergies of the quasi-bound states are obtained from an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring barriers of CsPbI3. Electronic states are computed by considering the quantum dot and wetting layer assembly. Results show that varying the quantum dot size affects the energy pinning of the QD; changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries.
Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground state energy at a small radius. Increasing the wetting layer thickness yields energy signatures similar to those of the bulk material for each QD size.
Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation
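As a rough illustration of the one-band effective-mass calculation described above, the sketch below solves a 1D finite-well analogue (a QD layer between step barriers) by finite differences, with a constant effective mass, a hard-wall simulation box, and reduced units. All parameters are illustrative assumptions, not the CsMAPbI3-xBrx material values, and the BenDaniel-Duke mass-discontinuity terms are omitted for brevity:

```python
# Toy 1D effective-mass Schrodinger solver: finite well between step
# barriers, discretised by finite differences. Reduced units throughout;
# parameters are illustrative, not the perovskite values.
import numpy as np

hbar = 1.0
m_eff = 1.0                  # constant effective mass
L_box, N = 20.0, 400         # hard-wall box and grid size
x = np.linspace(-L_box / 2, L_box / 2, N)
dx = x[1] - x[0]

V0, well_width = 5.0, 2.0    # barrier height and well (QD) width
V = np.where(np.abs(x) < well_width / 2, 0.0, V0)

# H = -(hbar^2 / 2m) d^2/dx^2 + V on the grid (tridiagonal matrix)
main = hbar**2 / (m_eff * dx**2) + V
off = -hbar**2 / (2 * m_eff * dx**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.sort(np.linalg.eigvalsh(H))
bound = energies[energies < V0]   # quasi-bound states below the barrier
```

Narrowing the well pushes the bound levels up and eventually out of the well, which is the 1D analogue of the size-dependent energy pinning discussed above.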
Procedia PDF Downloads 165
816 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources
Authors: Alana E. Jansen
Abstract:
With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, findings have conflicted, particularly when composition is examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies in research findings left open by various models of team effectiveness and high-performing teams, the present study uses the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. This study used a virtual interview design in which participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, over several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, combining Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, and educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 interviews being completed. Analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity.
Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources that vary between high- and low-performing teams, factors which may play an important role in team effectiveness research going forward. The findings may help explain some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding why only some teams achieve high-performance status.
Keywords: diversity, high-performing teams, job demands and resources, team effectiveness
Procedia PDF Downloads 187
815 Early Modern Controversies of Mobility within the Spanish Empire: Francisco de Vitoria and the Peaceful Right to Travel
Authors: Beatriz Salamanca
Abstract:
In his public lecture ‘On the American Indians’, given at the University of Salamanca in 1538-39, Francisco de Vitoria presented an unsettling defense of freedom of movement, arguing that the Spanish had the right to travel and dwell in the New World, since it was considered part of the law of nations [ius gentium] that men enjoyed free mutual intercourse anywhere they went. The principle of freedom of movement brought hopeful expectations, promising to bring mankind together and strengthen the ties of fraternity. However, it led to polemical situations when those whose mobility was in question represented a harmful threat or were for some reason undesired. In this context, Vitoria’s argument has been seen on multiple occasions as a justification of the expansion of the Spanish empire. In order to examine the meaning of Vitoria’s defense of free mobility, a more detailed look at Vitoria’s text is required, together with the study of some of his earliest works, among them his commentaries on Thomas Aquinas’s Summa Theologiae, where he presented relevant insights on the idea of the law of nations. In addition, it is necessary to place Vitoria’s work in the context of the intellectual tradition he belonged to and the responses he obtained from some of his contemporaries who were concerned with similar issues. The claim of this research is that the Spanish right to travel advocated by Vitoria was not intended to be interpreted in absolute terms, for it had to serve the purpose of bringing peace and unity among men and could not contradict natural law. In addition, Vitoria explicitly observed that the right to travel was only valid if the Spaniards caused no harm, a condition that has been underestimated by his critics. Therefore, Vitoria’s legacy is of enormous value, as it initiated a long-lasting discussion regarding the grounds under which human mobility could be restricted.
Again, under Vitoria’s argument it was clear that this freedom was not absolute, but the controversial nature of his defense of Spanish mobility demonstrates how difficult it was, and still is, to address the issue of the circulation of peoples across frontiers, and shows the significance of this discussion in today’s globalized world, where the rights and wrongs of notions like immigration, international trade or foreign intervention still lack sufficient consensus. This inquiry into Vitoria’s defense of the principle of freedom of movement is placed against the background of the history of political thought, political theory, international law, and international relations, following the contextual-history methodological framework of the ‘Cambridge School’.
Keywords: Francisco de Vitoria, freedom of movement, law of nations, ius gentium, Spanish empire
Procedia PDF Downloads 366
814 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information be exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4 200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents.
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; previous work does not provide clear guidelines or recommend statistically sound tests, offering instead surveys that collect many features to take into account as well as related work. c) It provides a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also empirically; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them, help improve the evaluation of information extraction proposals, and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
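The core performance measures in such an evaluation are precision, recall, and F1 over the records an extractor produces versus a gold standard. A minimal sketch follows; the extracted and gold records are invented, not from the paper's evaluation sets:

```python
# Minimal sketch of extractor evaluation measures; records are invented.

def prf(extracted, gold):
    """Precision, recall, F1 from sets of (slot, value) records."""
    tp = len(extracted & gold)                      # true positives
    p = tp / len(extracted) if extracted else 0.0   # precision
    r = tp / len(gold) if gold else 0.0             # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("title", "A Method"), ("authors", "P. Jimenez"), ("year", "2016")}
extracted = {("title", "A Method"), ("year", "2016"), ("venue", "Procedia")}

p, r, f1 = prf(extracted, gold)
```

In a full evaluation along the lines the paper argues for, such per-document scores would then be compared across proposals with statistically sound paired tests rather than by raw averages alone.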
Procedia PDF Downloads 248
813 Tommy: Communication in Education about Disability
Authors: Karen V. Lee
Abstract:
The background and significance of this study involve communication in education by a faculty advisor exploring story and music that inform others about a disabled teacher. Social issues draw deep reflection about the emotional turmoil. As becoming a teacher is a passionate yet complex endeavor for a musician, the faculty advisor shares a poetic but painful story about a disabled teacher being inducted into the teaching profession. The qualitative research method, as theoretical framework, draws on an autoethnography of music and story in which the faculty advisor approaches a professor for advice. His musicianship shifts her forward, backward, and sideways through feelings that evoke and provoke curriculum to remove communication barriers in education. They discover they do not transfer knowledge from educational method classes. Instead, the autoethnography embeds musical language as a metaphorical conduit for removing communication barriers in teacher education. Sub-themes involve communication barriers and educational technologies to ensure teachers receive social, emotional, physical, spiritual, and intervention disability resources. Major findings of the study show how autoethnography of music and story brings the authors to understand wider political issues of the practicum internship for teachers with disabilities. An epiphany reveals the irony of living in a culture of both uniformity and diversity. They explore the constructs of secrecy, ideology, abnormality, and marginalization by evoking visceral and emotional responses from the audience. As the voices harmonize plot, climax, characterization, and denouement, they dramatize meaning that is episodic yet incomplete to highlight the circumstances surrounding the disabled protagonist’s life. In conclusion, the qualitative research method argues for embracing storied experiences that depict communication in education.
Scholarly significance lies in embracing personal thoughts and feelings as a way of understanding social phenomena while highlighting the importance of removing communication barriers in education. The circumstance of a teacher with a disability is not uncommon in society. Thus, the authors resolve to remove barriers in education by using stories to transform the personal and cultural influences that provoke new ways of thinking about the curriculum for a disabled teacher.
Keywords: communication in education, communication barriers, autoethnography, teaching
Procedia PDF Downloads 240
812 The Effect of Realizing Emotional Synchrony with Teachers or Peers on Children’s Linguistic Proficiency: The Case Study of Uji Elementary School
Authors: Reiko Yamamoto
Abstract:
This paper reports on a joint research project in which a researcher in applied linguistics and elementary school teachers in Japan explored new ways to realize emotional synchrony in a classroom in childhood education. The primary purpose of this project was to develop a cross-curriculum of the first language (L1) and second language (L2) based on the concept of plurilingualism. This concept is common in Europe, and can-do statements are used in forming the standard of linguistic proficiency in any language; these are attributed to the action-oriented approach in the Common European Framework of Reference for Languages (CEFR). The CEFR has a basic tenet of language education: improving communicative competence. Can-do statements are classified into five categories based on this tenet: reading, writing, listening, speaking/interaction, and speaking/speech. The first approach of this research was to specify the linguistic proficiency of the children, who are still developing their L1. The elementary school teachers brainstormed and specified the linguistic proficiency of the children as the competency needed to synchronize with others, whether teachers or peers, physically and mentally. The teachers formed original can-do statements on language proficiency on the basis of the idea that emotional synchrony leads to understanding others in communication. The research objective is to determine the effect of language education based on the newly developed curriculum and can-do statements. The participants in the experiment were 72 third-graders at Uji Elementary School, Japan. For the experiment, 17 items were developed from the can-do statements formed by the teachers and divided into the same five categories as those of the CEFR. A can-do checklist consisting of the items was created. The experiment consisted of three steps: first, the students evaluated themselves using the can-do checklist at the beginning of the school year.
Second, one year of instruction was given to the students in Japanese and English classes (six periods a week). Third, the students evaluated themselves using the same can-do checklist at the end of the school year. The results of the statistical analysis showed an enhancement of the students’ linguistic proficiency. The average results of the post-check exceeded those of the pre-check in 12 of the 17 items. Moreover, significant differences were shown in four items, three of which belonged to the same category: speaking/interaction. It is concluded that children can come to understand others’ minds through physical and emotional synchrony. In particular, emotional synchrony is what teachers should aim at in childhood education.
Keywords: elementary school education, emotional synchrony, language proficiency, sympathy with others
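The pre/post comparison described above can be sketched as a paired t statistic on one checklist item. The self-ratings below are invented for illustration, not the Uji Elementary School data:

```python
# Sketch of a paired pre/post comparison on one can-do item;
# the ratings (1-4 scale, one value per student) are invented.
from statistics import mean, stdev
from math import sqrt

pre  = [2, 2, 3, 1, 2, 3, 2, 2, 1, 2]
post = [3, 3, 3, 2, 4, 3, 3, 2, 2, 3]

diffs = [b - a for a, b in zip(pre, post)]
d_mean = mean(diffs)
# paired t statistic: mean difference over its standard error
t_stat = d_mean / (stdev(diffs) / sqrt(len(diffs)))
```

The t statistic would then be compared against the t distribution with n - 1 degrees of freedom to decide whether the post-check gain on that item is significant.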
Procedia PDF Downloads 168
811 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment
Authors: Asif Ali, Daud Salim Faruquie
Abstract:
With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists, however, have established their knowledge in four areas, viz. environmental, economic, social, and cultural, popularly termed the four pillars of sustainability. Aspects of environmental and economic sustainability have been rigorously researched and practiced, and a huge volume of strong evidence of effectiveness has been established for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be instituted, as researchers and practitioners are developing and experimenting with methods across the globe. Therefore, the present research aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology, which included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environment; comparators including ‘before’ and ‘after’, ‘with’ and ‘without’, and ‘more’ and ‘less’; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The search of literature included major electronic databases, search websites, organizational resources, the directory of open access journals, and subscribed journals. Grey literature, however, was not included. The inclusion criteria filtered studies on the basis of research designs such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected.
Data extraction, critical appraisal, and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural, and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, body mass index, perinatal outcomes, and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability, and families’ social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture
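The pooled-effect step of such a partial meta-analysis can be sketched with an inverse-variance fixed-effect model. The effect sizes and standard errors below are invented, not the review's extracted data:

```python
# Sketch of an inverse-variance fixed-effect pooled estimate;
# (effect size, standard error) pairs are invented.
from math import sqrt

studies = [(0.30, 0.10), (0.12, 0.15), (0.25, 0.08), (0.05, 0.20)]

weights = [1 / se**2 for _, se in studies]           # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

Each study's effect and confidence interval, plus the pooled diamond, is what a forest plot then displays; a random-effects model would additionally account for between-study heterogeneity.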
Procedia PDF Downloads 401
810 Reconceptualizing “Best Practices” in Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is remains a black box yet to be investigated, given the trend of continuous change in public sector performance, as well as the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, such as benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practices when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. First, we observed the benchmarkers’ management of best practices in a public organization, so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work. 
Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol whose first part was theoretically informed, to spot the causal mechanisms suggested by previous research studies, and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.
Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 127
809 Enhancing Thai In-Service Science Teachers' Technological Pedagogical Content Knowledge Integrating Local Context and Sufficiency Economy into Science Teaching
Authors: Siriwan Chatmaneerungcharoen
Abstract:
An emerging body of ‘21st century skills’, such as adaptability, complex communication skills, technology skills, and the ability to solve non-routine problems, is valuable across a wide range of jobs in the national economy. Within the Thai context, a focus on the Philosophy of Sufficiency Economy is integrated into science education. Thai science education has advocated infusing 21st century skills and the Philosophy of Sufficiency Economy into the school curriculum, and several educational levels have launched such efforts. Therefore, developing science teachers with the proper knowledge is the most important factor in achieving these goals. The purposes of this study were to develop 40 cooperative science teachers’ Technological Pedagogical Content Knowledge (TPACK) and to develop a Professional Development Model integrating a Co-teaching Model and a Coaching System (Co-TPACK). TPACK is essential to career development for teachers. Forty volunteer in-service teachers, all cooperative science teachers, participated in this study for two years. Data sources throughout the research project consisted of teacher reflections, classroom observations, semi-structured interviews, situation interviews, questionnaires, and document analysis. An interpretivist framework was used to analyze the data. Findings indicate that at the beginning, the teachers understood only the meaning of the Philosophy of Sufficiency Economy but did not know how to integrate it into their science classrooms. Mostly, they preferred lecture-based and experimental teaching styles. As Co-TPACK progressed, the teachers blended their teaching styles and learning evaluation methods. 
Co-TPACK consists of three cycles: a Student Teachers’ Preparation Cycle, a Cooperative Science Teachers Cycle, and a Collaboration Cycle (co-teaching, co-planning, co-evaluating, and a coaching system). Co-TPACK enables the 40 cooperative science teachers, the student teachers, and the university supervisor to exchange their knowledge and experience of teaching science, through many communication channels including online ones. The teachers have used more Phuket context-integrated lessons and technology-integrated teaching and learning that make the Philosophy of Sufficiency Economy explicit. Their sustained development is shown in their lesson plans and teaching practices.
Keywords: technological pedagogical content knowledge, philosophy of sufficiency economy, professional development, coaching system
Procedia PDF Downloads 464
808 Examining Reading Comprehension Skills Based on Different Reading Comprehension Frameworks and Taxonomies
Authors: Seval Kula-Kartal
Abstract:
Developing students’ reading comprehension skills is an aim that is difficult to accomplish and requires long-term and systematic teaching and assessment processes. In these processes, teachers need tools that provide guidance on what reading comprehension is and which comprehension skills they should develop. Due to a lack of clear and evidence-based frameworks defining reading comprehension skills, especially in Turkiye, teachers and students mostly follow various classroom processes without a clear idea of what their comprehension goals are and what those goals mean. Since teachers and students do not have a clear view of comprehension targets, or of the strengths and weaknesses in students’ comprehension skills, formative feedback processes cannot be managed effectively. It is believed that detecting and defining influential comprehension skills may provide guidance to both teachers and students during the feedback process. Therefore, in the current study, some of the reading comprehension frameworks that define comprehension skills operationally were examined. The aim of the study is to develop a simple and clear framework that can be used by teachers and students during their teaching, learning, assessment, and feedback processes. The current study is qualitative research in which documents related to reading comprehension skills were analyzed. The study group therefore consisted of resources and frameworks that have made significant contributions to theoretical and operational definitions of reading comprehension. A content analysis was conducted on the resources included in the study group. To determine the validity of the themes and sub-categories revealed by the content analysis, three educational assessment experts were asked to examine the results. The Fleiss’ kappa coefficient revealed consistency among the themes and categories defined by the three experts. 
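The inter-rater agreement check described above can be sketched as a plain Fleiss' kappa computation. The rating matrix below is hypothetical, not the study's data; each row is one coded unit, with counts of how many of the three experts assigned it to each category:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a ratings matrix: ratings[i][j] is the number of
    raters assigning item i to category j (every row sums to the same n)."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # p_j: proportion of all assignments falling in category j
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_cats)]
    # p_i: observed agreement for item i (pairs of raters who agree)
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_items      # mean observed agreement
    p_e = sum(p * p for p in p_j)   # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 4 coded units, 3 raters, 2 categories -> kappa ~ 0.625
print(fleiss_kappa([[3, 0], [0, 3], [2, 1], [3, 0]]))
```

The same statistic is available as `fleiss_kappa` in `statsmodels.stats.inter_rater` if a library implementation is preferred.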
The content analysis of the reading comprehension frameworks revealed that comprehension skills can be examined under four themes. The first and second themes focus on understanding information given explicitly or implicitly within a text. The third theme includes skills readers use to make connections between their personal knowledge and the information given in the text. The fourth theme focuses on skills readers use to examine the text with a critical view. The results suggest that fundamental reading comprehension skills can be examined under these four themes, and teachers are recommended to use them in their reading comprehension teaching and assessment processes. Acknowledgment: This research is supported by the Pamukkale University Scientific Research Unit within the project titled ‘Developing a Reading Comprehension Rubric’.
Keywords: reading comprehension, assessing reading comprehension, comprehension taxonomies, educational assessment
Procedia PDF Downloads 82
807 Integrated Coastal Management for the Sustainable Development of Coastal Cities: The Case of El-Mina, Tripoli, Lebanon
Authors: G. Ghamrawi, Y. Abunnasr, M. Fawaz, S. Yazigi
Abstract:
Coastal cities are constantly exposed to environmental degradation and economic regression, fueled by rapid and uncontrolled urban growth as well as continuous resource depletion. This is the case in the city of Mina in Tripoli (Lebanon), where a lack of awareness of the need to preserve social, ecological, and historical assets, coupled with increasing development pressures, is threatening the socioeconomic status of the city’s residents, the quality of life, and accessibility to the coast. To address these challenges, a holistic coastal urban design and planning approach was developed to analyze the environmental, political, legal, and socioeconomic context of the city. This approach investigates the potential of balancing urban development with the protection and enhancement of cultural, ecological, and environmental assets under an integrated coastal zone management (ICZM) approach. The analysis of Mina’s different sectors adopted several tools, including direct field observation, interviews with stakeholders, and analysis of available data, historical maps, and previously proposed projects. The findings from the analysis were mapped and graphically represented, allowing the recognition of character zones that become the design intervention units. Consequently, the thesis proposes an urban, city-scale intervention that identifies six character zones (the historical fishing port, Abdul Wahab island, the abandoned Port Said, Hammam el Makloub, the sand beach, and the new developable area) and proposes context-specific design interventions that capitalize on the main characteristics of each zone. Moreover, the intervention builds on the institutional framework of ICZM as well as other studies previously conducted for the coast, and adopts nature-based solutions with hybrid systems to provide better environmental design solutions for developing the coast. 
This enables the realization of an all-inclusive, well-connected shoreline with easy and free access to the sea; a developed shoreline with an active local economy; and an improved urban environment.
Keywords: blue green infrastructure, coastal cities, hybrid solutions, integrated coastal zone management, sustainable development, urban planning
Procedia PDF Downloads 156
806 Socio-cultural Influence on Teachers’ Preparedness for Inclusive Education: A Mixed Methods Study in the Nepalese Context
Authors: Smita Nepal
Abstract:
Despite being on the global education reform agenda for over two decades, interpretations and practices of inclusive education vary widely across the world. In Nepal, similar to many other developing countries, inclusive education is still an emerging concept and limited research is available to date in relation to how inclusive education is conceptualized and implemented here. Moreover, very little is known about how teachers who are at the frontline of providing inclusive education understand this concept and how well they are prepared to teach inclusively. This study addresses this research gap by investigating an overarching research question, ‘How prepared are Nepalese teachers to practice inclusive pedagogy?’ Different societies and cultures may have different interpretations of the concepts of diversity and inclusion. Acknowledging that such contextual differences influence how these issues are addressed, such as preparing teachers for providing inclusive education, this study has investigated the research questions using a sociocultural conceptual framework. A sequential mixed-method research design involved quantitative data from 203 survey responses collected in the first phase, followed by qualitative data in the second phase collected through semi-structured interviews with teachers. Descriptive analysis of the quantitative data and reflexive thematic analysis of the qualitative data revealed a narrow understanding of inclusive education in the participating Nepalese teachers with limited preparedness for implementing inclusive pedagogy. Their interpretation of inclusion substantially included the need for non-discrimination and the provision of equal opportunities. This interpretation was found to be influenced by the social context where a lack of a deep understanding of human diversity was reported, leading to discriminatory attitudes and practices. 
In addition, common societal norms holding that experiencing privilege or disadvantage is normal for diverse groups of people appear to have led to limited efforts to enhance teachers’ understanding of and preparedness for inclusive education. This study has significant implications, not only in the Nepalese context but globally, for reform in policies and practices and for strengthening teacher education and professional development systems to promote inclusion in education. In addition, the significance of this research lies in highlighting the importance of further context-specific research in this area to ensure inclusive education in a real sense by valuing socio-cultural differences.
Keywords: inclusive education, inclusive pedagogy, sociocultural context, teacher preparation
Procedia PDF Downloads 71
805 Policy Implications of Cashless Banking on Nigeria’s Economy
Authors: Oluwabiyi Adeola Ayodele
Abstract:
This study analysed the policy and general issues that have arisen over time in Nigeria’s cashless banking environment as a result of the lack of a legal framework for electronic banking in Nigeria. It undertook an in-depth study of the cashless banking system: it discussed the evolution, growth, and development of cashless banking in Nigeria; revealed the expected benefits of the cashless banking system; appraised regulatory issues and other prevalent problems with cashless banking in Nigeria; and made appropriate recommendations where necessary. The study relied on primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria, statutes, conventions, and judicial decisions, while the secondary sources included books, journal articles, newspapers, and internet materials. The study revealed that cashless banking has been adopted in Nigeria but is still at a developing stage. It revealed that there is no law for the regulation of cashless banking in Nigeria; what Nigeria relies on for regulation is the Central Bank of Nigeria’s Cashless Policy, 2014. The Banks and Other Financial Institutions Act, Chapter B3, LFN, 2004 of Nigeria lacks provisions to accommodate issues of internet banking. However, under the general principles of legality in criminal law, and by the provisions of the Nigerian Constitution, a person can only be punished for conduct that has been defined as criminal by written laws, with the penalties specifically stated in the law. Although Nigeria has potent laws for the regulation of paper banking, these laws cannot be substituted for paperless transactions, because the issues involved in the two kinds of transaction differ. The study also revealed that the absence of law in the cashless banking environment in Nigeria will subject consumers to endless risks. 
This study revealed that the creation of banking markets via the internet relies on both available technologies and appropriate laws and regulations. It revealed, however, that the laws of some of the other countries considered have addressed most of the legal issues and other problems prevalent in the cashless banking environment. The study also revealed some other problems prevalent in the Nigerian cashless banking environment. The study concluded that for Nigeria to find solutions to the legal issues raised in its cashless banking environment, and to the other problems of cashless banking, it should have a viable legal framework for internet banking. It also concluded that the Central Bank of Nigeria’s policy on cashless banking is not potent enough to tackle the challenges posed to cashless banking in Nigeria, because policies have only a persuasive effect, not a binding one. There is, therefore, a need for appropriate laws for the regulation of cashless banking in Nigeria. The study further concluded that there is a need to create more awareness of the system among Nigerians and to solve infrastructural problems, such as the prevalent power outages that often create internet network problems.
Keywords: cashless-banking, Nigeria, policies, laws
Procedia PDF Downloads 489
804 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries, such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture that suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. In evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production and maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. 
For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly; the latter usually uses the radio communication inherent in the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults and failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) the specific data quality issues that have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline, one that digests the data and produces insights. We show the tools we use, including data ingestion, streaming data processing, machine learning model training, and the tool that coordinates and schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study: (1) it summarizes the landscape and challenges of predictive maintenance applications; (2) it takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
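The data-labeling and feature-engineering steps described above can be sketched for a run-to-failure dataset as follows. The engine names, cycle counts, and the 30-cycle failure horizon are illustrative assumptions, not values from the study:

```python
def label_run_to_failure(cycles_per_unit, horizon=30):
    """For each unit observed until failure, derive per-cycle labels:
    (cycle, remaining useful life, fails-within-horizon flag)."""
    labeled = {}
    for unit, n_cycles in cycles_per_unit.items():
        rows = []
        for cycle in range(1, n_cycles + 1):
            rul = n_cycles - cycle                  # remaining useful life
            rows.append((cycle, rul, int(rul <= horizon)))
        labeled[unit] = rows
    return labeled

def rolling_mean(sensor, window=5):
    """Trailing rolling mean over one sensor channel, a typical engineered
    feature; shorter windows are used at the start of the series."""
    return [sum(sensor[max(0, i - window + 1):i + 1]) / min(window, i + 1)
            for i in range(len(sensor))]

# Hypothetical engines: unit id -> total cycles until failure
labels = label_run_to_failure({"engine_1": 120, "engine_2": 95})
print(labels["engine_1"][0])   # early cycle: (1, 119, 0)
print(labels["engine_1"][-1])  # final cycle before failure: (120, 0, 1)
```

Because every engine contributes many inter-dependent records, train/test splits should be made by unit (all rows of an engine on one side), not by individual row, which is one way to handle the inter-dependence issue the abstract raises.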
Procedia PDF Downloads 386
803 Challenges influencing Nurse Initiated Management of Retroviral Therapy (NIMART) Implementation in Ngaka Modiri Molema District, North West Province, South Africa
Authors: Sheillah Hlamalani Mboweni, Lufuno Makhado
Abstract:
Background: The increasing number of people who test HIV positive and demand antiretroviral therapy (ART) prompted the National Department of Health to adopt the WHO recommendation of task shifting, whereby professional nurses (PNs), rather than hospital doctors, initiate ART. This resulted in the decentralization of services to primary health care (PHC), generating a need to build PNs' capacity in NIMART. After years of training, the impact of NIMART was assessed, and it was established that even though an increased number of people accessed ART, the quality of care remains a serious concern. The study aims to answer the following question: what are the challenges influencing NIMART implementation in primary health care? Objectives: This study explores challenges influencing NIMART training and implementation and makes recommendations to improve patient and HIV program outcomes. Methods: A qualitative, explorative program evaluation research design was used. The study was conducted in the rural districts of North West province. Purposive sampling was used to sample PNs trained in NIMART. Focus group discussions (FGDs) of 6-9 participants were used to collect data, and data were analysed using ATLAS.ti. Results: Five FGDs (n=28 PNs) were held and three program managers were interviewed. The study results revealed two themes: inadequacy in NIMART training, and health care system challenges. Conclusion: The deficiencies in NIMART training and the health care system challenges are a public health concern, as they compromise the quality of HIV management, resulting in poor patient outcomes, and hinder the goal of ending the HIV epidemic. These should be dealt with decisively by all stakeholders. 
Recommendations: The National Department of Health should improve NIMART training and HIV management through: standardization of the NIMART training curriculum, with the involvement of all relevant stakeholders and skilled facilitators; the introduction of pre-service NIMART training in institutions of higher learning; support of PNs by district and program managers; plans to deal with the shortage of staff; and addressing negative attitudes to ensure compliance with guidelines. There is a need to develop a conceptual framework that provides guidance and strengthens NIMART implementation in PHC facilities.
Keywords: antiretroviral therapy, nurse initiated management of retroviral therapy, primary health care, professional nurses
Procedia PDF Downloads 158