Search results for: alternative technical concepts
6512 The Impact of Introspective Models on Software Engineering
Authors: Rajneekant Bachan, Dhanush Vijay
Abstract:
The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.
Keywords: software engineering, architectures, introspective models, operating systems
Procedia PDF Downloads 539
6511 An Integrated DANP-PROMETHEE II Approach for Air Traffic Controllers’ Workload Stress Problem
Authors: Jennifer Loar, Jason Montefalcon, Kissy Mae Alimpangog, Miriam Bongo
Abstract:
The demanding professional roles that air traffic controllers (ATCs) play in air transport operations provided the main motivation for this paper. As controllers' workload stress becomes more complex due to various stressors, overcoming them in pursuit of improved controller efficiency and aircraft safety has become a relevant challenge. Therefore, in order to determine the main stressors and identify the best alternative, two widely known multi-criteria decision-making (MCDM) methods, DANP and PROMETHEE II, are applied. The proposed method is demonstrated in a case study at the Mactan station of the Civil Aviation Authority of the Philippines (CAAP). The results showed that the main stressors are high air traffic volume, extraneous traffic, unforeseen events, limitations and reliability of equipment, noise/distractors, microclimate, bad posture, relations with supervisors and colleagues, private life conditions/relationships, and emotional conditions. In the outranking of alternatives, compartmentalization emerged as the most preferred alternative for overcoming controllers' workload stress, implying that compartmentalization can best be applied to reduce controller workload stress.
Keywords: air traffic controller, DANP, MCDM, PROMETHEE II, workload stress
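The PROMETHEE II outranking step described above can be sketched in a few lines. The alternatives, criterion scores, and weights below are illustrative assumptions, not the study's data, and the usual (strict) preference function stands in for whichever preference functions the authors chose:

```python
# Minimal PROMETHEE II sketch: pairwise preference degrees are aggregated
# over weighted criteria, and net outranking flows give the final ranking.
def promethee_ii(scores, weights):
    """scores: {alternative: [criterion values]}, higher is better."""
    names = list(scores)
    n = len(names)

    def pref(a, b):  # usual (strict) preference function per criterion
        return sum(w * (1.0 if scores[a][k] > scores[b][k] else 0.0)
                   for k, w in enumerate(weights))

    phi = {}
    for a in names:
        plus = sum(pref(a, b) for b in names if b != a) / (n - 1)   # leaving flow
        minus = sum(pref(b, a) for b in names if b != a) / (n - 1)  # entering flow
        phi[a] = plus - minus                                        # net flow
    return sorted(phi.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical coping alternatives and assumed scores on three criteria:
scores = {"compartmentalization": [0.9, 0.7, 0.8],
          "shift rotation":       [0.6, 0.8, 0.5],
          "extra staffing":       [0.7, 0.5, 0.6]}
weights = [0.5, 0.3, 0.2]  # criterion weights, summing to 1 (e.g., from DANP)
ranking = promethee_ii(scores, weights)
print(ranking[0][0])  # top-ranked alternative
```

In the full method, the DANP step would supply the criterion weights that are simply assumed here.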
Procedia PDF Downloads 272
6510 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context
Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan
Abstract:
Lean management is an integrated socio-technical system for bringing about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey was carried out, and an attempt was made to build an exhaustive list of all the input manifests related to customers, suppliers, and internal practices necessary for LM implementation, coupled with a similarly exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized and empirically validated with data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme, which aims to increase competitiveness with the help of lean concepts. There is huge scope to bring lean benefits to the Indian industries, as the implementation status is quite low, yet hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research was thus carried out in the Indian manufacturing industries. The basic steps of the research methodology are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of the survey instrument, sampling and data collection, and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean.
The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to implement lean successfully. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to optimum utilization of resources. This work is among the first survey-based empirical analyses of the role of customers, suppliers, and internal practices in an effective lean implementation in the Indian manufacturing sector.
Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management
Procedia PDF Downloads 179
6509 Enhancing English Language Learning through Learners' Cultural Background
Authors: A. Attahiru, Rabi Abdullahi Danjuma, Fatima Bint
Abstract:
Language and culture are two closely related concepts, each affecting the other. This paper attempts to examine the definitions of language and culture by discussing the relationship between them. The paper further presents some instructional strategies for the teaching of language and culture, as well as the influence of culture on language. It also looks at the implications for language education, and finally, recommendations and conclusions are drawn.
Keywords: culture, language, relationship, strategies, teaching
Procedia PDF Downloads 416
6508 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how the products of machine learning models can and should be protected by IP law; for the purposes of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, namely those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, the key focus of this paper is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only the European patent law regimes but also the Chinese and Singaporean approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
Procedia PDF Downloads 109
6507 Methodology of the Turkey’s National Geographic Information System Integration Project
Authors: Buse A. Ataç, Doğan K. Cenan, Arda Çetinkaya, Naz D. Şahin, Köksal Sanlı, Zeynep Koç, Akın Kısa
Abstract:
With its spatial data reliability, interpretation, and querying capabilities, a Geographical Information System (GIS) makes significant contributions to scientists, planners, and practitioners. Geographic information systems have received great attention in today's digital world, are growing rapidly, and are being used ever more efficiently. Access to and use of current and accurate geographical data, the most important components of a Geographical Information System, has become a necessity rather than a need for sustainable and economic development. This project aims to enable the sharing of data collected by public institutions and organizations on a web-based platform. Within the scope of the project, the INSPIRE (Infrastructure for Spatial Information in the European Community) data specifications are taken as a road map. In this context, Turkey's National Geographic Information System (TUCBS) Integration Project supports the sharing of spatial data by 61 pilot public institutions in compliance with defined national standards. In this paper, prepared by members of the TUCBS Integration Project team, the technical process is explained with a detailed methodology. The main technical processes of the project consist of Geographic Data Analysis, Geographic Data Harmonization (Standardization), Web Service Creation (WMS, WFS), and Metadata Creation and Publication. The paper conveys, with a detailed methodology, the integration process carried out so that the data produced by the 61 institutions can be shared through the National Geographic Data Portal (GEOPORTAL).
Keywords: data specification, geoportal, GIS, INSPIRE, TUCBS, Turkey's national geographic information system
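The WMS services mentioned above are queried through standard OGC request URLs; the sketch below builds a WMS 1.3.0 GetMap request with the standard library. The endpoint and layer name are hypothetical placeholders, not actual TUCBS/GEOPORTAL identifiers:

```python
# Build an OGC WMS 1.3.0 GetMap request URL (the query-parameter names are
# fixed by the WMS specification; the endpoint and layer here are assumed).
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height, crs="EPSG:4326"):
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # min/max coords per the CRS
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name for illustration only:
url = wms_getmap_url("https://geoportal.example.gov.tr/wms",
                     layer="tucbs:administrative_units",
                     bbox=(36.0, 26.0, 42.0, 45.0), width=800, height=400)
```

A harmonized layer published this way can be consumed by any WMS-capable client, which is the point of standardizing on INSPIRE-style specifications.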
Procedia PDF Downloads 146
6506 Concepts of the Covid-19 Pandemic and the Implications of Vaccines for Health Security in Nigeria and Diasporas
Authors: Wisdom Robert Duruji
Abstract:
The outbreak of SARS-CoV-2 infection was recorded in January 2020 in Wuhan City, Hubei Province, China. This study examines the concepts of the COVID-19 pandemic and the implications of vaccines for health security in Nigeria and its diasporas. It challenges the widely accepted assumption that the first case of coronavirus infection in Nigeria was recorded on February 27th, 2020, in Lagos. The study utilizes a range of research methods to achieve its objectives, including the double-layered culture technique, literature review, website knowledge, Google searches, news media information, academic journals, fieldwork, and on-site observations. These diverse methods allow for a comprehensive analysis of the concepts and implications being studied. The study finds that coronavirus infection can be asymptomatic; the antigenicity of leukocytes (white blood cells), which produce immunogenic haptens and interferons (α, β, and γ) that fight infectious parasites, may constitute an immune response that prevented severe virulence in healthy individuals, which would explain why healthy coronavirus patients in Nigeria naturally recovered two to three weeks after the onset of infection and tested negative. However, this study finds the fatality data from the Nigeria Centre for Disease Control (NCDC) to be incorrect; it posits that the fatalities were primarily due to underlying ailments, hunger, and malnutrition in debilitated, comorbid, or compromised patients. The study concludes that the kits and Polymerase Chain Reaction (PCR) machines currently used by the NCDC in testing and confirming COVID-19 in Nigeria are not ideal; as programmed, they cannot separate the strain into its specific serotypes within the genus coronavirus and family Coronaviridae, and might have confirmed patients with febrile symptoms caused by cough, catarrh, typhoid, and malaria parasites as COVID-19 positive.
Therefore, the study posits that the coronavirus strains circulating in Nigeria are opportunistic parasites that thrive in human immuno-suppressed conditions, like the herpesvirus; they cannot be eradicated by vaccines, and the only virucides are interferons, immunoglobulins, and possibly synthetic antiviral guanosine drugs such as copegus (ribavirin). The findings emphasize that COVID-19 is not the primary pandemic disease in Nigeria and that the lockdown was a mirage and not necessary; rather, the pandemic diseases in Nigeria are corruption, nepotism, hunger, and malnutrition caused by ineptitude in governance, religious dichotomy, and ethnic conflicts.
Keywords: coronavirus, corruption, Covid-19 pandemic, lockdown, Nigeria, vaccine
Procedia PDF Downloads 69
6505 Epistemology in African Philosophy: A Critique of African Concept of Knowledge
Authors: Ovett Nwosimiri
Abstract:
African tradition and what it entails are the content of African concepts of knowledge. The study of African concepts of knowledge is also known as African epistemology; in other words, African epistemology is the branch of African philosophy that deals with knowledge. This branch engages with the nature and concept of knowledge, the ways in which knowledge can be gained, the ways in which one can justify an epistemic claim or validate a knowledge claim, the limits of human knowledge, etc. The protagonists of African epistemology base their argument for a distinctive or unique African epistemology on the premise 'that each race is endowed with a distinctive nature and embodies in its civilization a particular spirit'. All human beings share certain basic values and perceptions irrespective of origin, and this fosters forms of interaction between people of different nationalities. Africans, like other people, share in certain values, perceptions, and interactions with the rest of the world. These shared values, perceptions, and interactions prompted African people to attempt to 'modernize' their societies, or to develop some forms of their tradition, in harmony with the ethos of the contemporary world. Based on the above ideas, it is worth investigating whether such an (African) epistemology is still unique. The advocates of African epistemology focus on the externalist notion of justification and neglect the idea that both the internalist and externalist notions of justification are needed in order to arrive at a coherent and well-founded account of epistemic justification.
Thus, this paper will critically examine the claim that there is a unique African epistemology (a mode of knowing that is peculiar to Africans, and that this African mode of knowing is a social, monistic, and situated notion of knowledge), and the grounds for justifying beliefs and epistemic claims.
Keywords: internalist, externalist, knowledge, justification
Procedia PDF Downloads 266
6504 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data, from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered as they could have an impact on the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
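The SHAP attributions used above rest on Shapley values from cooperative game theory. The idea can be illustrated with an exact (brute-force, exponential in the number of features) computation on a toy additive "patent impact" model; the model, its weights, and the feature values are hypothetical, and the SHAP library approximates this efficiently for real tree models like LightGBM:

```python
# Exact Shapley values: a feature's attribution is its average marginal
# contribution over all orderings of the other features.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley weight for a coalition S of size r
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += w * (predict(with_i) - predict(without))
        phis.append(phi)
    return phis

# Toy impact model with assumed weights for novelty, backward citations,
# and independent claims (not the paper's fitted model):
predict = lambda f: 3.0 * f[0] + 1.0 * f[1] + 2.0 * f[2]
phis = shapley_values(predict, x=[1.0, 4.0, 2.0], baseline=[0.0, 0.0, 0.0])
# For an additive model, each phi equals weight * (x - baseline): [3.0, 4.0, 4.0]
```

The attributions always sum to the difference between the prediction for `x` and the baseline prediction, which is what makes the per-feature "contribution" interpretation coherent.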
Procedia PDF Downloads 50
6503 Growth Performance and Meat Quality of Cobb 500 Broilers Fed Phytase and Tannase Treated Sorghum-Based Diets
Authors: Magaya Rutendo P., Mutibvu Tonderai, Nyahangare Emmanuel T., Ncube Sharai
Abstract:
This study aimed to evaluate the effects of phytase and tannase addition to sorghum-based broiler diets on growth performance and meat quality. Twelve experimental diets were formulated at three sorghum levels (0, 50, and 100%) and four enzyme levels: no enzyme, 5000 FTU phytase, 25 TU tannase, and a combination of 5000 FTU phytase plus 25 TU tannase. Data on voluntary feed intake, average weekly weight gain, and feed conversion ratio were recorded and used to assess growth performance. Meat technical and nutritional parameters were used to determine meat quality. Broilers fed complete sorghum diets with the phytase and tannase enzyme combination had the highest feed intake in the first (24.4 ± 0.04 g/bird/day) and second (23.0 ± 1.06 g/bird/day) weeks of life. Complete sorghum diets with phytase (83.0 ± 0.88 g/bird/day) and tannase (122.0 ± 0.88 g/bird/day) showed the highest feed intake in the third and fourth weeks, respectively. Broilers fed 50% sorghum diets with tannase (135.3 ± 0.05 g/bird/day) and complete maize diets with phytase (158.1 ± 0.88 g/bird/day) had the highest feed intake during weeks five and six, respectively. Broilers fed a 50% sorghum diet without enzymes had the highest weight gain in the final week (606.5 ± 32.39 g). Comparable feed conversion was observed in birds fed complete maize and 50% sorghum diets. Dietary treatment significantly influenced live body, carcass, liver, kidney, and abdominal fat pad weights and intestinal length, but did not affect the nutritional and technical parameters of Pectoralis major meat.
Keywords: feed efficiency, sorghum, carcass, exogenous enzymes
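The feed conversion ratio used above as a growth-performance metric is simply feed intake divided by weight gain over the same period, so lower values mean better feed efficiency. A minimal sketch with illustrative numbers (not the trial's data):

```python
# Feed conversion ratio (FCR): grams of feed consumed per gram of weight
# gained over the same period; lower is better.
def feed_conversion_ratio(feed_intake_g, weight_gain_g):
    return feed_intake_g / weight_gain_g

# Hypothetical bird eating 980 g of feed while gaining 560 g in a week:
fcr = feed_conversion_ratio(980.0, 560.0)  # 1.75
```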
Procedia PDF Downloads 55
6502 The Challenge of Assessing Social AI Threats
Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi
Abstract:
The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that risk management of AI systems include both technical and human oversight, while the NIST AI RMF (Appendix C) and the ENISA AI Framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, a lack of explainability (e.g., model complexity can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on NIST's four fundamental criteria for explainability, explainability threats can be classified into four sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: Explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: The provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: The system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors; when present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Social AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g., ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems), and EU legislation (e.g.,
the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, and the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.
Keywords: social threats, artificial intelligence, mitigation, social experiment
Procedia PDF Downloads 66
6501 Insertion of Photovoltaic Energy at Residential Level at Tegucigalpa and Comayagüela, Honduras
Authors: Tannia Vindel, Angel Matute, Erik Elvir, Kelvin Santos
Abstract:
Currently, Honduras is incentivizing energy generation from renewable sources such as hydroelectricity, wind power, biomass and, most recently with the strongest growth, photovoltaic energy. By July 2015, 455.2 MW of photovoltaic capacity had been installed, increasing the installed capacity of the national interconnected system by 24% over 2014, according to the National Energy Company (NEC), which made it possible to reduce the system's dependence on thermoelectric generation. Given the good results of those large-scale photovoltaic plants, the question arises: is the integration of micro-scale photovoltaic systems in urban and rural areas of interest to the distribution utility and to consumers? To answer that question, the insertion of photovoltaic energy in the residential sector of Tegucigalpa and Comayagüela (Central District), Honduras, has been researched to determine its technical and economic viability. According to the National Statistics Institute (NSI), the Francisco Morazán department had more than 180,000 houses with power service in 2001. Tegucigalpa, the departmental capital and capital of Honduras, together with Comayagüela, has the highest population density in the region, with 1,300,000 inhabitants in 2014 (NSI). The residential sector represents a high percentage of consumption in the south-central region of Honduras, 49% of the total according to the NEC in 2014, and 90% of this sector consumes in a range of 0 to 300 kWh/month. All this, in addition to the high level of losses in the transmission and distribution systems (31.3% in 2014) and the availability of an annual average solar radiation of 5.20 kWh/(m²·day) according to NASA, suggests the feasibility of implementing photovoltaic systems as a solution that gives households a level of independence and could, in addition, inject unused energy into the grid.
The capability of exchanging energy with the grid could make the acquisition of photovoltaic systems more affordable to consumers, through energy compensation programs or other kinds of incentives that could be created. The technical viability of photovoltaic system insertion has been analyzed, considering the monthly average solar radiation to determine the monthly average energy that would be generated with locally accessible technology, as well as the effects on the grid of injecting locally generated energy. In addition, the economic viability has been analyzed, considering the high cost of photovoltaic systems, utility costs, location, and the monthly energy consumption requirements of families. It was found that the inclusion of photovoltaic systems in Tegucigalpa and Comayagüela could decrease the region's demand by 6 MW if 100% of households used photovoltaic systems, whose acquisition may become more accessible with the help of government incentives and/or energy exchange programs.
Keywords: grid connected, photovoltaic, residential, technical analysis
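The monthly-energy estimate described above can be sketched with the peak-sun-hours method, using the 5.20 kWh/(m²·day) average radiation quoted in the abstract; the system size and performance ratio below are assumed values, not the study's:

```python
# Peak-sun-hours sketch: daily radiation in kWh/m^2 equals equivalent hours
# of full sun at 1 kW/m^2, so monthly yield is roughly
# peak power * sun hours * performance ratio * days.
def monthly_pv_energy_kwh(peak_kw, radiation_kwh_m2_day,
                          performance_ratio=0.75, days=30):
    # performance_ratio lumps inverter, wiring, temperature and soiling losses
    return peak_kw * radiation_kwh_m2_day * performance_ratio * days

# A hypothetical 2 kWp rooftop system in Tegucigalpa:
energy = monthly_pv_energy_kwh(2.0, 5.20)  # 234.0 kWh/month
```

Compared with the 0 to 300 kWh/month band that 90% of the residential sector consumes, a system of this assumed size would cover a large share of a typical household's demand.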
Procedia PDF Downloads 265
6500 Evaluation of the Mechanical Behavior of a Retaining Wall Structure on a Weathered Soil through Probabilistic Methods
Authors: P. V. S. Mascarenhas, B. C. P. Albuquerque, D. J. F. Campos, L. L. Almeida, V. R. Domingues, L. C. S. M. Ozelim
Abstract:
Retaining structures are increasingly considered in geotechnical engineering projects due to extensive urban growth. Such engineering constructions may present instabilities over time and may require reinforcement or even rebuilding. In this context, statistical analysis is an important tool for decision making regarding retaining structures. This study approaches the failure probability of constructing a retaining wall over the debris of an old, collapsed one. The new solution will extend approximately 350 m along the margins of Lake Paranoá in Brasília, the capital of Brazil. The building process must also account for the utilization of the ruins as a caisson. A series of in situ and laboratory experiments defined the local soil strength parameters, and a Standard Penetration Test (SPT) defined the in situ soil stratigraphy. The parameters obtained were also verified against soil data from a collection of master's and doctoral works from the University of Brasília on similar local soils. Initial studies show that a concrete wall is the proper solution for this case, taking into account the technical, economic, and deterministic analyses. On the other hand, in order to better analyze the statistical significance of the factors of safety obtained, a Monte Carlo analysis was performed for the concrete wall and two other initial solutions. A comparison between the statistical and risk results generated for the different solutions indicated that a gabion solution would better fit the financial and technical feasibility of the project.
Keywords: economical analysis, probability of failure, retaining walls, statistical analysis
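The Monte Carlo failure-probability idea described above can be sketched in a few lines: sample uncertain soil strength parameters, compute a factor of safety for each realization, and count the fraction with FoS below 1. The distributions, loads, and the simplified FoS model below are illustrative assumptions, not the project's values:

```python
# Monte Carlo estimate of probability of failure, P(FoS < 1), for a toy
# sliding check with assumed normally distributed strength parameters.
import random

def prob_of_failure(n=100_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        friction_tan = rng.gauss(0.55, 0.08)  # tan(phi), assumed distribution
        cohesion_kpa = rng.gauss(25.0, 6.0)   # cohesion in kPa, assumed
        resisting = 100.0 * friction_tan + cohesion_kpa  # assumed normal load of 100 kPa
        driving = 60.0                                   # assumed driving stress, kPa
        if resisting / driving < 1.0:  # factor of safety below unity
            failures += 1
    return failures / n

pf = prob_of_failure()  # roughly 0.02 with these assumed parameters
```

Running the same loop for each candidate solution (concrete wall, gabion, etc.) with its own FoS model is what allows the risk comparison the abstract describes.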
Procedia PDF Downloads 406
6499 Meanings and Concepts of Standardization in Systems Medicine
Authors: Imme Petersen, Wiebke Sick, Regine Kollek
Abstract:
In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expressions, metabolic pathways, and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges regarding data handling and processing. Tools based on bioinformatics have been developed to resolve the upcoming problems of systematizing, standardizing, and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous data of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been put on data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and incorporated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate in science and technology studies (STS) on standardization, which rather emphasizes the dynamics, contexts, and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profile of diseases to establish systems medical approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and what consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to get a deeper insight into standard operating procedures, not only in systems medicine but also beyond.
Keywords: data, science and technology studies (STS), standardization, systems medicine
Procedia PDF Downloads 342
6498 Applying the CA Systems in Education Process
Authors: A. Javorova, M. Matusova, K. Velisek
Abstract:
The article summarizes experience with teaching methodologies for laboratory technical subjects using a number of software products. The main aim is to modernize the teaching process in accordance with today's requirements, based on information technology. The introduction of CA technologies into the learning process increases the attractiveness and effectiveness of study. This paper discusses the areas where individual CA systems are used and briefly presents, in each chapter, the environments in which the CA systems operate.
Keywords: education, CA systems, simulation, technology
Procedia PDF Downloads 397
6497 Influence of Infinite Elements in Vibration Analysis of High-Speed Railway Track
Authors: Janaki Rama Raju Patchamatla, Emani Pavan Kumar
Abstract:
The idea of increasing existing train speeds and introducing high-speed trains in India as a part of Vision 2020 is challenging in terms of both economic viability and technical feasibility. More than economic viability, technical feasibility has to be thoroughly checked for safe operation and execution. Trains moving at high speeds need a well-established, firm and safe track thoroughly tested against vibration effects. With increased train speeds, the track structure and layered soil-structure interaction have to be critically assessed for vibration and displacements. Physical establishment of track, testing and experimentation is a costly and time-consuming process. Software-based modelling and simulation give relatively reliable, cost-effective means of testing the effects of critical parameters like sleeper design and density, properties of track and sub-grade, etc. The present paper reports the applicability of infinite elements in reducing unrealistic stress-wave reflections from the soil-structure interface. The influence of the infinite elements is quantified in terms of the displacement time histories of the adjoining soil and the deformation pattern in general. In addition, the railhead response histories at various locations show that the numerical model is realistic, without any aberrations at the boundaries. The numerical model is quite promising in its ability to simulate the critical parameters of track design.
Keywords: high-speed railway track, finite element method, infinite elements, vibration analysis, soil-structure interface
Procedia PDF Downloads 272
6496 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue
Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov
Abstract:
The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor when mining high coal seams with release to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling was carried out, with validation in laboratory conditions and calculation of relative errors. A method for calculating the capacity of an apron feeder based on a machine vision system is offered, together with a simplified technique for three-dimensional modelling of the examined measuring area. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder while working off high coal by lava complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection task is correcting distortions of the laser grid, which simplifies obstacle detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on an obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser-grid distortion is demonstrated.
Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport
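Since the productivity calculation described in this abstract uses only addition, subtraction, multiplication and division, the idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the grid cell area, bulk density and observation interval are hypothetical values.

```python
def bulk_volume_m3(cell_heights_m, cell_area_m2):
    # Approximate the volume of rock mass on the feeder as the sum of
    # per-cell material heights (recovered from the light-marker grid)
    # multiplied by the footprint area of one grid cell.
    return sum(cell_heights_m) * cell_area_m2

def feeder_capacity_kg_per_s(volume_m3, bulk_density_kg_m3, interval_s):
    # Productivity in kg/s: mass moved during the observation interval
    # divided by the length of that interval.
    return volume_m3 * bulk_density_kg_m3 / interval_s

# Hypothetical example: three grid cells of 0.25 m^2 each, a coal bulk
# density of about 900 kg/m^3, measured over a 5 s interval.
volume = bulk_volume_m3([0.1, 0.2, 0.3], 0.25)
capacity = feeder_capacity_kg_per_s(volume, 900, 5)
```

Because only elementary arithmetic is involved, a calculation of this shape ports directly to the small microcontrollers the abstract mentions.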
Procedia PDF Downloads 115
6495 Optimization of Sol-Gel Copper Oxide Layers for Field-Effect Transistors
Authors: Tomas Vincze, Michal Micjan, Milan Pavuk, Martin Weis
Abstract:
In recent years, alternative materials have been gaining attention as replacements for polycrystalline and amorphous silicon, which are the standard for low-requirement devices, where silicon is unnecessarily high in cost. For that reason, metal oxides are envisioned as the new materials for low-requirement applications such as sensors, solar cells, energy storage devices, and field-effect transistors. The most common way of growing metal oxide layers is sputtering; however, this is a high-cost fabrication method, and a more industry-suitable alternative is the sol-gel method. In this group of materials, many oxides exhibit semiconductor-like behavior with sufficiently high mobility to be applied in transistors. The sol-gel method is a cost-effective deposition technique for semiconductor-based devices. Copper oxides, as p-type semiconductors with free charge mobility up to 1 cm²/V·s, are suitable replacements for poly-Si or a-Si:H devices. However, to reach the potential of silicon devices, fine-tuning of the material properties is needed. Here we focus on optimizing the electrical parameters of copper oxide-based field-effect transistors by modifying the precursor solvent (usually 2-methoxyethanol). To achieve good solubility and high-quality films, a better solvent is required. Since almost no solvent has both a high dielectric constant and a high boiling point, an alternative approach was proposed using blended solvents. By mixing isopropyl alcohol (IPA) and 2-methoxyethanol (2ME), the precursor reached better solubility. The quality of the layers fabricated using the mixed solutions was evaluated with respect to surface morphology and electrical properties. The IPA:2ME solvent mixture reached optimum results at a weight ratio of 1:3. The cupric oxide layers for the optimal mixture had the highest crystallinity and the highest effective charge mobility.
Keywords: copper oxide, field-effect transistor, semiconductor, sol-gel method
Procedia PDF Downloads 136
6494 Performance Evaluation of Various Displaced Left Turn Intersection Designs
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety, two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection's (CI) efficiency considerably, and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn (XDL) intersections, also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate overall intersection performance for different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios.
The study intersection, located in Orlando, Florida, is experiencing recurring congestion in the PM peak hour and is operating near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements, southbound and westbound. The results showed that a partial EN XDL alternative proved to be effective and compared favorably to a full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay and queue lengths. Throughput improvements were remarkable at the higher volume level, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths in the XDL scenarios showed percent reductions ranging from 25-40%. The analysis revealed how a partial XDL design can improve overall intersection performance at various demands, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes or only one of the critical movements, while other critical movements operate near or above capacity, does not provide significant benefits compared to the conventional intersection.
Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model
Procedia PDF Downloads 311
6493 Catalytic Hydrothermal Decarboxylation of Lipid from Activated Sludge for Renewable Diesel Production
Authors: Ifeanyichukwu Edeh, Tim Overton, Steve Bowra
Abstract:
Currently, biodiesel is produced from plant oils or animal fats by a liquid-phase catalysed transesterification process at low temperature. Although biodiesel is renewable and to a large extent sustainable, inherent properties such as poor cold flow, low oxidation stability and low cetane value restrict its application to blends with fossil fuels. An alternative to biodiesel is renewable diesel, produced by catalytic hydrotreating of oils and fats; it is considered a drop-in fuel because its properties are similar to petroleum diesel. In addition to developing alternative production routes, there is continued interest in reducing the cost of the feedstock; waste cooking oils and fats are increasingly used as feedstocks due to their low cost. However, such waste oils and fats are highly adulterated, resulting in high free fatty acid content, which in turn impacts the efficiency of FAME production. Therefore, in light of the need to develop alternative lipid feedstocks and related efficient catalysis, the present study investigates the potential of producing renewable diesel from lipids extracted from activated sludge, a wastewater treatment by-product, through catalytic hydrothermal decarboxylation. The microbial lipids were first extracted from the activated sludge using the Folch et al. method, before hydrothermal decarboxylation reactions were carried out in a batch reactor using palladium (Pd/C) and platinum (Pt/C) on activated carbon as the catalysts. The impact of three temperatures (290, 300 and 330 °C) and residence times between 30 min and 4 h was assessed. At the end of the reaction, the products were recovered using organic solvents and characterized using gas chromatography (GC). The principal products of the reaction were pentadecane and heptadecane. The highest yields of pentadecane and heptadecane from the lipid extract were 23.23% and 15.21%, respectively. These yields were obtained at 290 °C and a residence time of 1 h using Pt/C.
To the best of our knowledge, the current work is the first investigation of the hydrothermal decarboxylation of lipid extract from activated sludge.
Keywords: activated sludge, lipid, hydrothermal decarboxylation, renewable diesel
Procedia PDF Downloads 319
6492 The Search for an Alternative to Tabarru` in Takaful Models
Authors: Abu Umar Faruq Ahmad, Muhammad Ayub
Abstract:
Tabarru` (unilateral gratuitous contribution) is thought to be the basic concept that distinguishes Takaful from conventional, non-Sharīʿah-compliant insurance. The Sharīʿah compliance of its current practice has been questioned on the premise that (a) it is a form of commutative contract, and (b) it is akin to the commercial corporate structure of insurance companies, since Takaful operators follow the same marketing strategies, allocate to reserves, share the underwriting surplus one way or the other, provide loans to the Takaful funds, and consequently absorb the underwriting losses. The Sharīʿah scholars are of the view that the relationship between participants in Takaful should take the form of a commitment to donate, under which a contributor commits to donate a sum of money for mutual help and cooperation, on the condition that the balance, if any, should be returned to him. With the aim of finding solutions to the above-mentioned concerns and other Sharīʿah-related issues, the study investigates whether Takaful companies are functioning in accordance with the Islamic principles of brotherhood, solidarity, and cooperative risk sharing. To this end, it discusses the cooperative model of Takaful to address current and future Sharīʿah-related and legal concerns. The study proposes an alternative model and considers it to best serve the objectives of Takaful, since it operates on the basis of ta`awun, or mutual cooperation.
Keywords: hibah, musharakah ta`awuniyyah, Tabarru`, Takaful
Procedia PDF Downloads 446
6491 Assessing Female Students' Understanding of the Solar System Concepts by Implementing I-Cube Technology
Authors: Elham Ghazi Mohammad
Abstract:
This study examined female students' understanding of solar system concepts through the use of I-Cube technology, a virtual reality technology. The study was conducted at Qatar University with samples of eighth and ninth preparatory grade students in the State of Qatar. The research framework comprises designated quantitative research designs and methods of data collection and analysis, including pre- and post-conceptual exams. The research is based on an experimental design that focuses on students' performance on conceptual questions. A group of 120 students from the eighth and ninth grades, selected randomly, was divided into two pools of 60 students each, representing the designated control and treatment groups. The solar system lesson of interest was taught by teacher candidates (senior students at the College of Education at QU), who taught both the experimental group (integrating the I-Cube) in a virtual lab at Qatar University and the control group (without this technology) in an independent school in the State of Qatar. It is noteworthy that students usually face difficulties learning topics that require imagining real situations, such as the solar system and the inner planets. The collected data were statistically analyzed using one-way ANOVA and one-way ANCOVA in SPSS Statistics. The obtained results revealed that integrating I-Cube technology significantly enhanced the female students' conceptual understanding of the solar system. Interestingly, our findings demonstrated the applicability of I-Cube technology for enhancing students' understanding of subjects of interest within the landscape of the basic sciences.
Keywords: virtual lab, integrating technology, I-Cube, solar system
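The one-way ANOVA used here to compare the control and treatment groups reduces to an F ratio of between-group to within-group variance. The study itself ran this in SPSS; as an illustration only, the same statistic can be computed in plain Python, with invented post-test scores standing in for the real data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented scores for a hypothetical control and treatment group.
f_value = one_way_anova_f([[1, 2, 3], [4, 5, 6]])
```

The resulting F value would then be compared against the F distribution with (k−1, n−k) degrees of freedom, which is what SPSS reports as the significance test.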
Procedia PDF Downloads 241
6490 Students' Post COVID-19 Experiences with E-Learning Platforms among Undergraduate Students of Public Universities in the Ashanti Region, Ghana
Authors: Michael Oppong, Stephanie Owusu Ansah, Daniel Ofori
Abstract:
The study investigated students' post-COVID-19 experiences with e-learning platforms among undergraduate students of public universities in the Ashanti region of Ghana. The study drew respondents from two public universities, the Kwame Nkrumah University of Science and Technology (KNUST) Business School and the Kumasi Technical University (KsTU) Business School in Ghana. Given that the population of the two public universities was fairly high, sampling had to be done. The overall sample of the study was 480 students randomly drawn from the two universities using the sampling ratio given by Alreck and Settle (2004): 360 students from the Kwame Nkrumah University of Science and Technology (KNUST) Business School and 120 from the Kumasi Technical University (KsTU) Business School. The study employed questionnaires as the data collection tool. The data gathered comprised 289 responses out of 480 questionnaires administered, representing 60.2%. The data were analyzed using pie charts, bar charts, percentages, and line graphs. Findings revealed that the e-learning platforms are still useful; however, students used them on a weekly basis post-COVID-19, unlike in the COVID-19 era, when they were used daily. All other academic activities, with the exception of examinations, are still undertaken on the e-learning platforms; however, the platforms are underutilized in the post-COVID-19 period. The study recommends that universities invest in infrastructure development to enable all academic activities, most especially examinations, to be undertaken using the e-learning platforms in order to curtail future challenges.
Keywords: e-learning platform, undergraduate students, post-COVID-19 experience, public universities
Procedia PDF Downloads 104
6489 Islamic Financial Instrument, Standard Parallel Salam as an Alternative to Conventional Derivatives
Authors: Alireza Naserpoor
Abstract:
Derivatives are among the most important innovations of the past decades. In financial markets, they have changed the whole way the stock, commodities and currency markets operate. Beside many advantages, conventional derivatives contracts have some disadvantages too: they have been implicated in raising volatility, increasing bankruptcies and causing financial crises. The Standard Parallel Salam (SPS) contract, as an Islamic financial product, is meanwhile a financing instrument that can be used for risk management by investors. Standard Parallel Salam is a Shari'ah-compliant contract and, furthermore, an alternative to conventional derivatives. Although unstructured forms of it have been used in several Islamic countries, this contract was introduced as a structured and standard financial instrument on the Iran Mercantile Exchange (IME) in 2014. In this paper, after introducing parallel Salam, we examine a collection of international experience and local measures regarding the launch of the standard parallel Salam contract, describe standard scenarios for trading this instrument, covering the two main approaches to using SPS, and report practical experience with it on the Iran Mercantile Exchange. Afterwards, we make a comparison between SPS and futures contracts as conventional derivatives.
Keywords: futures contracts, hedging, shari'ah compliant instruments, standard parallel salam
Procedia PDF Downloads 392
6488 Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products) for Higher Education
Authors: J. Miranda, D. Chavarría-Barrientos, M. Ramírez-Cadena, M. E. Macías, P. Ponce, J. Noguez, R. Pérez-Rodríguez, P. K. Wright, A. Molina
Abstract:
Higher education methods need to evolve because the new generations of students learn in different ways. One way is by adopting emergent technologies and new learning methods and by promoting the maker movement. As a result, Tecnologico de Monterrey is developing Open Innovation Laboratories as an immediate response to the educational challenges of the world. This paper presents an Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products). The Open Innovation Laboratory is composed of a set of specific resources that students and teachers use to provide solutions to current problems in priority sectors through the development of a new generation of products. This new generation of products embodies the concepts Sensing, Smart, and Sustainable. The Open Innovation Laboratory has been implemented in different courses in the context of New Product Development (NPD) and Integrated Manufacturing Systems (IMS) at Tecnologico de Monterrey. The implementation consists of adapting the Open Innovation Laboratory to each course's syllabus, in combination with specific methodologies for product development, learning methods (Active Learning and Blended Learning using Massive Open Online Courses, MOOCs) and rapid product realization platforms. Using the proposed concepts, it is possible to show that students can propose innovative and sustainable products, and to demonstrate how the learning process can be improved using technological resources in the higher education sector. Finally, examples of innovative S3 products developed at Tecnologico de Monterrey are presented.
Keywords: active learning, blended learning, maker movement, new product development, open innovation laboratory
Procedia PDF Downloads 395
6487 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is the most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand is a traditional answer, widespread in our cities, that develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that only a central area attracts all trips: if this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is strongly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to obtain the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
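As a sketch of the concentration measure used above, the Gini coefficient over, say, trip attractions per zone can be computed with the standard rank-weighted formula. This is a generic illustration, not the paper's code; the zone values below are hypothetical.

```python
def gini(values):
    # Gini coefficient of a non-negative distribution, via the
    # rank-weighted formula G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    # over sorted values x_1 <= ... <= x_n with 1-based ranks i.
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Hypothetical trip attractions per zone: a perfectly even city scores 0;
# full concentration of trips in one zone approaches 1.
even = gini([100, 100, 100, 100])
concentrated = gini([0, 0, 0, 400])
```

A low Gini value would correspond to the dispersed-demand regime where, per the analysis above, transfer-based structures become attractive.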
Procedia PDF Downloads 231
6486 An Empirical Investigation on the Dynamics of Knowledge and IT Industries in Korea
Authors: Sang Ho Lee, Tae Heon Moon, Youn Taik Leem, Kwang Woo Nam
Abstract:
Knowledge and IT inputs to other industrial production have become a key factor in the competitiveness of national and regional economies, as in the knowledge economies of smart cities. Knowledge and IT industries lead industrial innovation and technical (r)evolution through low-cost, high-efficiency production and by creating new value chains and new production path chains, which is referred to as knowledge and IT dynamics. This study aims to investigate the knowledge and IT dynamics in Korea, analyzed through the input-output model and structural path analysis. Twenty-eight industries were reclassified into seven categories (Agriculture and Mining, IT Manufacture, Non-IT Manufacture, Construction, IT Service, Knowledge Service, Non-Knowledge Service) to take a close look at the knowledge and IT dynamics. Knowledge and IT dynamics were analyzed through the changes in input-output coefficients and multiplier indices, in terms of technical innovation, as well as the changes in the structural paths from knowledge and IT to other industries, in terms of new production value creation, between 1985 and 2010. The structural paths of knowledge and IT show not only that IT fosters the generation, circulation and use of knowledge through IT industries and IT-based services, but also that knowledge encourages IT use through creating, sharing and managing knowledge. As a result, this paper provides an empirical investigation of the knowledge and IT dynamics of the Korean economy. Knowledge and IT have played an important role in inter-industrial transactional inputs for production, as well as in the creation of new industries. The births of input-output production paths mostly originated in the knowledge and IT industries, while the deaths of input-output production paths took place in the traditional industries between 1985 and 2010.
The Korean economy has been in transition to a knowledge economy in the smart city.
Keywords: knowledge and IT industries, input-output model, structural path analysis, dynamics of knowledge and IT, knowledge economy, knowledge city and smart city
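The input-output machinery behind the multiplier indices mentioned above solves x = Ax + f for total sectoral output x, given a technical-coefficient matrix A and final demand f. The following is a generic sketch of that calculation via the Neumann series, not the authors' code; the 2×2 coefficients and demand figures are invented for illustration.

```python
def leontief_output(A, f, iterations=500):
    # Solve x = A x + f using the Neumann series
    # x = f + A f + A^2 f + ... = (I - A)^{-1} f,
    # which converges when the technical-coefficient matrix A is
    # productive (spectral radius below 1).
    n = len(f)
    x = list(f)
    term = list(f)
    for _ in range(iterations):
        term = [sum(A[i][j] * term[j] for j in range(n)) for i in range(n)]
        x = [x[i] + term[i] for i in range(n)]
    return x

# Hypothetical 2-sector economy (say, IT and non-IT).
A = [[0.2, 0.3],
     [0.1, 0.4]]           # a_ij: input of sector i per unit output of sector j
f = [10.0, 20.0]           # final demand by sector
x = leontief_output(A, f)  # total output required to satisfy f
```

Output multipliers, and the structural paths decomposing (I − A)⁻¹ into chains of inter-industry transactions, are derived from the same inverse.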
Procedia PDF Downloads 334
6485 Professionals' Learning from Casework in Child Protection: The View from Within
Authors: Jude Harrison
Abstract:
Child protection is a complex and sensitive practice. The core responsibility is the care and protection of children and young people who have been subject to, or who are at risk of, abuse and neglect. The work involves investigating allegations of harm, preparing for and making representations to the legal system, and case planning and management across a continuum of complicated care interventions. Professionals' learning for child protection practice is evident in a range of literature investigating multiple learning processes such as university preparation, student placements, professional supervision, training, and other post-qualifying professional development experiences at work. There is, however, very limited research into how caseworkers learn in and through their daily practice. Little is known, therefore, about how learning at work unfolds for caseworkers, the dimensions in which it can be understood, or the ways in which it can best be facilitated and supported. Compounding this, much of the current child protection learning literature reflects an orthodox conception of learning as mentalistic and individualised, in which knowledge is typically understood as abstract theory or as technical skill or competency. This presentation outlines key findings from a PhD research study that explored learning at work for statutory child protection caseworkers from an alternative interpretation of learning, using a practice theory approach. Practice theory offers an interpretation of learning as performative and grounded in situated experience. The findings of the study show that casework practice is both a mode and a site of learning. The study was ethnographic in design and followed 17 child protection caseworkers via in-depth interviews, observations and participant reflective journaling. Inductive and abductive analysis was used to organise and interpret the data and expand the analysis, leading to themes.
Key findings show learning to be a sociomaterial property of doing, highlight the social ontological character of learning, and identify teleoaffectivity as a feature of learning. The findings contribute to theoretical and practical understandings of learning and practice in child protection and child welfare, and to the professional learning literature more broadly. The findings have the potential to contribute to policy directions at state, territory and national levels to enhance child protection practice and systems.
Keywords: adult learning, workplace learning, child welfare, sociomaterial, practice theory
Procedia PDF Downloads 76
6484 Use of Concept Maps as a Tool for Evaluating Students' Understanding of Science
Authors: Aregamalage Sujeewa Vijayanthi Polgampala, Fang Huang
Abstract:
This study explores the genesis and development of concept mapping as a useful tool for science education, its effectiveness as a technique for teaching, learning and evaluation in secondary school science, and the role played by National College of Education science teachers. Concept maps, when carefully employed and executed, serve as an integral part of the teaching method, a measure of teaching effectiveness, and a tool for evaluation. Research has shown that science concept maps can have a positive influence on student learning and motivation. The success of concept maps in an instruction class depends on the type of theme selected, the development of learning outcomes, and the flexibility of instruction in providing a library unit equipped with multimedia where learners can interact. The study was restricted to 15 respondents (6 male and 9 female), third-year pre-service science teachers on internship in Gampaha district, Sri Lanka. Data were collected through a 15-item questionnaire provided to learners, in-depth interviews, and observations of 18 science classes. The two hypotheses generated for the study were rejected, while the results revealed that significant differences exist between the factors influencing teachers' choice of concept maps, their usefulness, and the problems hindering the effectiveness of concept maps for the teaching and learning of secondary science in schools. The study found that concept maps can be used as an effective measure to evaluate students' understanding of concepts and their misconceptions. Even the teacher trainees could not identify that the key concept belongs at the top of a map, with subordinate concepts falling below. It is recommended that pre-service science teacher trainees be given thorough training in using concept mapping as an evaluation instrument.
Keywords: concept maps, evaluation, learning science, misconceptions
Procedia PDF Downloads 274
6483 Building Resilience through Inclusion of Global Citizenship Education in Pre-Service Teacher Education in Pakistan
Authors: Fouzia Ajmal
Abstract:
Global Citizenship Education (GCED) could prove to be an effective means of preventing violent extremism, as it sustains respect for all and builds a sense of belonging to humankind. To meet Target 4.7 of the Sustainable Development Goals, it is important to focus on global citizenship education at all levels of education in general, and in pre-service teacher education in particular, so that its message and practices reach the young masses. Pre-service education is imperative for developing the knowledge, skills, and dispositions of prospective teachers. The current study investigated the integration of GCED into the pre-service teacher education curriculum of Pakistan. The study was delimited to the B.Ed (Hons) Elementary Education programme. The curriculum of B.Ed Elementary developed by the Higher Education Commission was analyzed through a Curriculum Alignment Matrix. Thirty-one course outlines were analyzed, and percentages were used to measure the level of integration of GCED in the courses. The analysis showed that the concepts of civic sense, tolerance, duties and rights of citizens, and fundamental human rights are partially aligned in a few of the courses. Tolerance, active citizenship, and respect for cultural diversity and religious harmony are evident in the Pakistan Studies and teaching of social studies courses, and relevant books are listed as resources in these courses. Intercultural understanding is not very evident, while globalization is mentioned in a few courses. It is recommended that a deliberate effort be made to integrate the concepts of Global Citizenship Education so as to enable prospective teachers to develop the skills necessary to play an active role in promoting peace and building elementary school students' resilience to extremism.
Keywords: curriculum analysis, global citizenship education, preservice teacher education, resilience building
Procedia PDF Downloads 149