Search results for: students’ learning achievements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10304


3344 AI-Powered Personalized Teacher Training for Enhancing Language Teaching Competence

Authors: Ororho Maureen Ekpelezie

Abstract:

This study investigates language educators' perceptions and experiences regarding AI-driven personalized teacher training modules in Awka South, Anambra State, Nigeria. Utilizing a stratified random sampling technique, 25 schools across various educational levels were selected to ensure a representative sample. A total of 1000 questionnaires were distributed among language teachers in these schools, focusing on assessing their perceptions and experiences related to AI-driven personalized teacher training. With an impressive response rate of 99.1%, the study garnered valuable insights into language teachers' attitudes towards AI-driven personalized teacher training and its effectiveness in enhancing language teaching competence. The quantitative analysis revealed predominantly positive perceptions towards AI-driven personalized training modules, indicating their efficacy in addressing individual learning needs. However, challenges were identified in the long-term retention and transfer of AI-enhanced skills, underscoring the necessity for further refinement of personalized training approaches. Recommendations stemming from these findings emphasize the need for continued refinement of training methodologies and the development of tailored professional development programs to alleviate educators' concerns. Overall, this research enriches discussions on the integration of AI technology in teacher training and professional development, with the aim of bolstering language teaching competence and effectiveness in educational settings.

Keywords: language teacher training, AI-driven personalized learning, professional development, language teaching competence, personalized teacher training

Procedia PDF Downloads 17
3343 Entrepreneurship and Innovation: The Essence of Sustainable, Smart and Inclusive Economies

Authors: Isabel Martins, Orlando Pereira, Ana Martins

Abstract:

This study aims to highlight that, in changing environments, organisations need to adapt their behaviours to the demands of the new economic reality. The main purpose of this study is to focus on the relationship between entrepreneurship and innovation, with learning as the mediating factor. It is within this entrepreneurial spirit that the literature reveals a concern with the current economic perspective towards knowledge, considering it both the production factor par excellence and a source of entrepreneurial capacity and innovation. Entrepreneurship is a mind-set focused on identifying opportunities of economic value and translating these into the pursuit of business opportunities through innovation. It connects art and science and is a way of life, as opposed to a simple mode of business creation and profiteering. This perspective underlines the need to develop the global individual for the globalised world, the strategic key to economic and social development. The objective of this study is to explore the notion that relational capital, which is established between the entrepreneur and all the other economic role players both inside and outside the organization, is indeed determinant in developing entrepreneurial capacity. However, this depends on the organizational culture of innovation. In this context, entrepreneurship is an ‘entrepreneurial capital’ inherent in the organization that is not limited to the skills needed for work. This study is a critique of the extant literature, which will also be supported by primary data collected to study graduates’ perceptions of their entrepreneurial capital. Limitations are centered on both the design and the sample of this study. This study is of added value for both scholars and organisations in the current innovation economy.

Keywords: entrepreneurship, innovation, learning, relational capital

Procedia PDF Downloads 212
3342 A Valid Professional Development Framework For Supporting Science Teachers In Relation To Inquiry-Based Curriculum Units

Authors: Fru Vitalis Akuma, Jenna Koenen

Abstract:

The science education community is increasingly calling for learning experiences that mirror the work of scientists. Although inquiry-based science education is aligned with these calls, the implementation of this strategy is a complex and daunting task for many teachers. Thus, policymakers and researchers have noted the need for continued teacher Professional Development (PD) in the enactment of inquiry-based science education, coupled with effective ways of reaching the goals of teacher PD. This is a complex problem for which educational design research is suitable. The purpose at this stage of our design research is to develop a generic PD framework that is valid as the blueprint of a PD program for supporting science teachers in relation to inquiry-based curriculum units. The seven components of the framework are the goal, learning theory, strategy, phases, support, motivation, and an instructional model. Based on a systematic review of the literature on effective (science) teacher PD, coupled with developer screening, we have generated a design principle for each component of the PD framework. For example, as per the associated design principle, the goal of the framework is to provide science teachers with experiences in authentic inquiry, coupled with enhancing their competencies in the adoption, customization, and design, and then the classroom implementation and revision, of inquiry-based curriculum units. The seven design principles have allowed us to synthesize the PD framework, which, together with the design principles, constitutes the preliminary outcome of the current research. We are in the process of evaluating the content and construct validity of the framework, based on nine one-on-one interviews with experts in inquiry-based classroom and teacher learning. To this end, we have developed an interview protocol with the input of eight such experts in South Africa and Germany. Using the protocol, the expert appraisal of the PD framework will involve three experts each from Germany, South Africa, and Cameroon. These countries, where we originate and/or work, provide a variety of inquiry-based science education contexts, making them suitable for the evaluation of the generic PD framework. Based on the evaluation, we will revise the framework and its seven design principles to arrive at the final outcomes of the current research. While the final content- and construct-valid version of the framework will serve as an example of how effective inquiry-based science teacher PD may be achieved, the final design principles will be useful to researchers when transforming the framework for use in any specific educational context. For example, in our further research, we will transform the framework into one that is practical and effective in supporting inquiry-based practical work in resource-constrained physical sciences classrooms in South Africa. Researchers in other educational contexts may similarly consider the final framework and design principles in their work. Thus, our final outcomes will inform practice and research worldwide on supporting teachers to increase the incorporation of learning experiences that mirror the work of scientists.

Keywords: design principles, educational design research, evaluation, inquiry-based science education, professional development framework

Procedia PDF Downloads 135
3341 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research

Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova

Abstract:

The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies, the introduction of new concepts and novel techniques and technologies related to the recasting of the aims of education, and the remodeling of the content and methodology of education, which would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches in facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability, and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment has been carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality, and durability of the knowledge acquired by students. For the purposes of monitoring the effect of the game-based pedagogical technique on the durability of the acquired knowledge, a test-based examination on the same content has been administered to students from a control group (CG) and students from an experimental group (EG) after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the tests for the CG and the EG applied after the six-month period, which, however, is not indicative of the presence or absence of a marked effect from the applied pedagogic technique in cases when the entry levels of the two groups are different. 2. For a more reliable comparison, independent of the entry level of each group, another parameter, the “indicator of efficacy of game techniques for the durability of knowledge”, has been used to assess the achievement results and the durability of this methodology of education. The monitoring of the studied parameters in their dynamic unfolding across different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of methods and techniques for diagnostics has been employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of a specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. The enhanced durability of the acquired knowledge indicates the transition of that knowledge from short-term storage into the long-term memory of pupils and students, which justifies the conclusion that didactic games have beneficial effects on learners’ cognitive skills. The innovations in teaching enhance motivation, creativity, and independent cognitive activity in the process of acquiring the material taught.
The innovative methods also allow for non-traditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.

Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research

Procedia PDF Downloads 379
3340 New Model of Immersive Experiential Branding for International Universities

Authors: Kakhaber Djakeli

Abstract:

For market leadership, iconic brands are already establishing their unique digital avatars in the Metaverse and offering Non-Fungible Tokens to their fans. The Metaverse can be defined as an evolutionary step in the development of the Internet. So if companies and brands use the Internet, they can logically find new solutions for themselves and their customers in the Metaverse. Marketing and management today must learn how to combine physical-world activities with those described as digital, virtual, and immersive. A “phygital” solution, uniting the physical and digital competitive activities of the company and covering the question of how to use virtual worlds for brand development and Non-Fungible Tokens for greater attractiveness, will soon be among the most relevant questions for branding. Thinking comprehensively, we can describe this type of branding as immersive. Immersive brands give customers more mesmerizing experiences than traditional ones. Accordingly, branding can be divided, in the company's own understanding, into two models: traditional and immersive. Immersive branding, being more directed at the sensorial challenges of humans, will be a major task for international universities in the near future because they target Generation Z. To help international universities open the door to mesmerizing, immersive branding, marketing research has been undertaken. The main goal of the study was to establish a model for immersive branding at international universities and answer the many questions that logically arise in university life. A Delphi-type survey, known as an expert study, was undertaken with one great mission: to help international universities open up opportunities for phygital activities, supported by reliable knowledge and a model of immersive branding. The questionnaire sent to education experts covered professional questions ranging from education to customer segmentation, branding, attitudes toward students, and knowledge of immersive marketing. The research results were interesting and encouraging enough for the author to establish the new model of immersive experiential branding for international universities.

Keywords: branding, immersive marketing, students, university

Procedia PDF Downloads 64
3339 The Estimation Method of Inter-Story Drift for Buildings Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Seismic response-based structural health monitoring systems have been employed to reduce seismic damage. The inter-story drift ratio, which is the major index for seismic capacity assessment, is employed to estimate the seismic damage of buildings. Meanwhile, seismic response analysis to estimate the structural responses of buildings demands significantly high computational cost due to the increasing number of high-rise and large buildings. To estimate the inter-story drift ratio of buildings under earthquakes efficiently, this paper suggests an estimation method for the inter-story drift of buildings using an artificial neural network (ANN). In the method, a radial basis function neural network (RBFNN) is integrated with an optimization algorithm that tunes its variables through evolutionary learning, which is referred to as an evolutionary radial basis function neural network (ERBFNN). The method estimates the inter-story drift without seismic response analysis when buildings are subjected to new earthquakes. The effectiveness of the estimation method is verified through a simulation using a multi-degree-of-freedom system.
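
A minimal sketch of the ERBFNN idea described above, under stated assumptions: the training data below are placeholders for real earthquake/building features paired with drift ratios, a genetic-style loop evolves the RBF centre positions and a shared width, and the output weights are fitted by least squares at each evaluation. It is an illustration of the technique, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: rows = earthquake/building feature vectors,
# targets = peak inter-story drift ratios (placeholders for real analysis results).
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

def rbf_design(X, centers, width):
    # Gaussian RBF activations for every sample/centre pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fitness(genome, X, y, n_centers):
    # Genome encodes centre coordinates plus a shared width.
    centers = genome[:-1].reshape(n_centers, X.shape[1])
    width = abs(genome[-1]) + 1e-3
    H = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
    return -np.mean((H @ w - y) ** 2)           # higher fitness = lower MSE

n_centers, dim = 10, X.shape[1]
pop = rng.normal(size=(30, n_centers * dim + 1))
for gen in range(50):                            # simple evolutionary loop
    scores = np.array([fitness(g, X, y, n_centers) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the fittest genomes
    children = parents[rng.integers(0, 10, 20)] + 0.1 * rng.normal(size=(20, pop.shape[1]))
    pop = np.vstack([parents, children])         # elitism plus mutated offspring

best = pop[np.argmax([fitness(g, X, y, n_centers) for g in pop])]
print("best training fitness:", fitness(best, X, y, n_centers))
```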

Keywords: structural health monitoring, inter-story drift ratio, artificial neural network, radial basis function neural network, genetic algorithm

Procedia PDF Downloads 317
3338 Impact of Electric Vehicles on Energy Consumption and Environment

Authors: Amela Ajanovic, Reinhard Haas

Abstract:

Electric vehicles (EVs) are considered an important means to cope with current environmental problems in transport. However, their high capital costs and limited driving ranges present major barriers to broader market penetration. The core objective of this paper is to investigate the future market prospects of various types of EVs from an economic and ecological point of view. Our approach is based on the calculation of the total cost of ownership of EVs in comparison to conventional cars and on a life-cycle approach to assess environmental benignity. The most crucial parameters in this context are the km driven per year, the depreciation time of the car, and the interest rate. The analysis of future prospects is based on technological learning regarding the investment costs of batteries. The major results are as follows. The major disadvantages of battery electric vehicles (BEVs) are the high capital costs, mainly due to the battery, and a low driving range in comparison to conventional vehicles. These problems could be reduced with plug-in hybrids (PHEVs) and range extenders (REXs). These technologies have lower CO₂ emissions across the whole energy supply chain than conventional vehicles, but unlike BEVs, they are not zero-emission vehicles at the point of use. The number of km driven has a higher impact on total mobility costs than the learning rate. Hence, the use of EVs as taxis and in car-sharing leads to the best economic performance. The most popular EVs are currently full hybrid EVs. They have only slightly higher costs and similar operating ranges to conventional vehicles. But since they are dependent on fossil fuels, they can only be seen as an energy-efficiency measure. However, they can serve as a bridging technology as long as BEVs and fuel cell vehicles do not gain high popularity, and, together with PHEVs and REXs, contribute to faster technological learning and reductions in battery costs. Regarding the promotion of EVs, the best results could be reached with a combination of monetary and non-monetary incentives, as in Norway, for example. The major conclusion is that, to harvest the full environmental benefits of EVs, a very important aspect is the introduction of CO₂-based fuel taxes. This should ensure that the electricity for EVs is generated from renewable energy sources; otherwise, total CO₂ emissions are likely to be higher than those of conventional cars.
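
A minimal sketch of the kind of total-cost-of-ownership comparison described above, assuming a standard capital-recovery (annuity) formula; all numeric inputs are placeholders, not the authors' data, and serve only to show how annual mileage dominates the per-km cost.

```python
def annualized_tco(capital_cost, interest_rate, depreciation_years,
                   km_per_year, energy_cost_per_km, om_cost_per_year):
    """Total cost of ownership per km, using a capital recovery factor."""
    i, n = interest_rate, depreciation_years
    crf = i * (1 + i) ** n / ((1 + i) ** n - 1)      # annuity factor
    annual_capital = capital_cost * crf
    total_per_year = annual_capital + om_cost_per_year + energy_cost_per_km * km_per_year
    return total_per_year / km_per_year

# Placeholder comparison: a BEV with higher capital cost but cheaper energy per km
# versus a conventional car; the km driven per year dominate the outcome.
for label, cap, e_km in [("BEV", 35000, 0.04), ("conventional", 25000, 0.09)]:
    for km in (10000, 40000):   # private use vs. taxi/car-sharing mileage
        cost = annualized_tco(cap, 0.05, 10, km, e_km, 600)
        print(f"{label:12s} {km:6d} km/yr -> {cost:.3f} EUR/km")
```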

Keywords: costs, mobility, policy, sustainability

Procedia PDF Downloads 210
3337 How Technology Can Help Teachers in Reflective Practice

Authors: Ambika Perisamy, Asyriawati binte Mohd Hamzah

Abstract:

The focus of this presentation is to discuss teacher professional development (TPD) through the use of technology. TPD is necessary to prepare teachers for the future challenges they will face throughout their careers and to develop new skills and good teaching practices. We will also discuss current issues in embracing technology in the field of early childhood education and the impact on the professional development of teachers. Participants will also learn to apply teaching and learning practices through the use of technology. One major objective of this presentation is to coherently fuse practical, technological, and theoretical content. The process begins by concretizing a set of preconceived ideas, which need to be joined with theoretical justifications found in the literature. Technology can make observations fairer and more reliable, easier to implement, and more preferable to teachers and principals. Technology will also help principals to improve classroom observations of teachers and ultimately improve teachers' continuous professional development. Video technology allows early childhood teachers to record and keep video for reflection at any time. It also provides opportunities for them to share recordings with their principals for professional dialogue and continuous professional development planning. A total of 10 early childhood teachers and 4 principals were involved in these efforts, which identified and analyzed the gaps in the quality of classroom observations and their correlation with developing teachers as reflective practitioners. The methodology used involves active exploration with video recordings, conversations, interviews, and authentic teacher–child interactions, which form the key thrust in improving teaching and learning practice. A qualitative analysis of photographs, videos, and transcripts illustrating teachers' reflections, together with classroom observation checklists before and after the use of video technology, was adopted. Arguably, even when PD support is strong, if teachers cannot connect with or create meaning out of the opportunities made available to them, they may remain passive or uninvolved. Therefore, teachers must see the value of applying new ideas, such as technology, and approaches to practice while creating personal meaning out of professional development. These video recordings are transferable and can be shared and edited through social media, email, and common storage between teachers and principals. In conclusion, regarding the importance of reflective practice among early childhood teachers and the concerns raised before and after the use of video technology, teachers and principals shared views on the feasibility, practicality, and relevance of video technology.

Keywords: early childhood education, reflective, improve teaching and learning, technology

Procedia PDF Downloads 478
3336 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms

Authors: Habtamu Ayenew Asegie

Abstract:

Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, households' economies are diminished, and their well-being falls into trouble. Hence, governments and humanitarian agencies offer considerable resources for poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. In this study, design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset is accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation is executed using various metrics such as accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms by achieving an accuracy of 96.06% and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features like ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of their house, ‘Region’ they lived in, whether a household uses ‘Electricity’ or not, and ‘Type of toilet facility’ of a household are determinant factors that should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
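
A minimal scikit-learn sketch of the model-comparison step described above: synthetic data stands in for the pre-processed EDHS features, a grid search tunes each model, and accuracy, F1, and AUC-ROC are reported. The XGBoost and LightGBM models mentioned in the abstract would be swapped in through their scikit-learn-compatible wrappers; GradientBoostingClassifier is used here as a stand-in.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Synthetic stand-in for the pre-processed EDHS features (wealth status as a binary target).
X, y = make_classification(n_samples=2000, n_features=15, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RandomForest": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    "AdaBoost": (AdaBoostClassifier(random_state=0), {"n_estimators": [50, 200]}),
    "GradBoost": (GradientBoostingClassifier(random_state=0), {"learning_rate": [0.05, 0.1]}),
}

for name, (est, grid) in models.items():
    search = GridSearchCV(est, grid, cv=3, scoring="accuracy")   # hyper-parameter grid search
    search.fit(X_tr, y_tr)
    pred = search.predict(X_te)
    proba = search.predict_proba(X_te)[:, 1]
    print(name, f"acc={accuracy_score(y_te, pred):.3f}",
          f"f1={f1_score(y_te, pred):.3f}", f"auc={roc_auc_score(y_te, proba):.3f}")
```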

Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction

Procedia PDF Downloads 23
3335 Artificial Neural Network to Predict the Optimum Performance of Air Conditioners under Environmental Conditions in Saudi Arabia

Authors: Amr Sadek, Abdelrahaman Al-Qahtany, Turkey Salem Al-Qahtany

Abstract:

In this study, a backpropagation artificial neural network (ANN) model has been used to predict the cooling and heating capacities of air conditioners (AC) under different conditions. Sufficiently large measurement results were obtained from the national energy-efficiency laboratories in Saudi Arabia and were used for the learning process of the ANN model. The parameters affecting the performance of the AC, including temperature, humidity level, specific heat enthalpy indoors and outdoors, and the air volume flow rate of indoor units, have been considered. These parameters were used as inputs for the ANN model, while the cooling and heating capacity values were set as the targets. A backpropagation ANN model with two hidden layers and one output layer could successfully correlate the input parameters with the targets. The characteristics of the ANN model including the input-processing, transfer, neurons-distance, topology, and training functions have been discussed. The performance of the ANN model was monitored over the training epochs and assessed using the mean squared error function. The model was then used to predict the performance of the AC under conditions that were not included in the measurement results. The optimum performance of the AC was also predicted under the different environmental conditions in Saudi Arabia. The uncertainty of the ANN model predictions has been evaluated taking into account the randomness of the data and lack of learning.
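
A minimal sketch of a two-hidden-layer backpropagation network for this kind of capacity regression, using scikit-learn's MLPRegressor; the input columns and the toy target formula are placeholders, not the laboratory measurements or the authors' network settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Placeholder inputs: [indoor temp, outdoor temp, humidity, indoor enthalpy, outdoor enthalpy, airflow]
X = rng.uniform([18, 25, 20, 30, 40, 300], [30, 50, 90, 70, 110, 1200], size=(500, 6))
y = 5.0 + 0.08 * X[:, 5] / 100 - 0.05 * (X[:, 1] - 35) + rng.normal(0, 0.2, 500)  # toy cooling capacity (kW)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1),  # two hidden layers
)
model.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```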

Keywords: artificial neural network, uncertainty of model predictions, efficiency of air conditioners, cooling and heating capacities

Procedia PDF Downloads 53
3334 Control Strategy for a Solar Vehicle Race

Authors: Francois Defay, Martim Calao, Jean Francois Dassieu, Laurent Salvetat

Abstract:

Electric vehicles are a solution for reducing pollution using green energy. The Shell Eco-marathon provides rules intended to minimize battery use during the race. The use of a solar panel combined with efficient motor control and race strategy allows, in the best case, a 60 kg vehicle with one pilot to be driven using only solar energy. This paper presents a complete model of a solar vehicle used for the Shell Eco-marathon. This project, called Helios, is a cooperation between undergraduate students, academic institutes, and industrial partners. The prototype is an ultra-energy-efficient vehicle based on a one-square-meter solar panel and a custom-made brushless controller to optimize the electrical part. The vehicle is equipped with sensors and an embedded system to provide all the data in real time in order to evaluate the best strategy for the course. A complete model in MATLAB/Simulink is used to test the optimal strategy to increase overall endurance. Experimental results are presented to validate the different parts of the model: mechanical, aerodynamic, electrical, and solar panel. The major finding of this study is to provide solutions for identifying the model parameters (rolling resistance coefficient, drag coefficient, motor torque coefficient, etc.) by means of experimental results combined with identification techniques. Once the coefficients are validated, the strategy to optimize consumption and average speed can be tested in simulation before being implemented for the race. The paper describes all the simulation and experimental parts and provides results in order to optimize the global efficiency of the vehicle. This work was started four years ago, has involved many students in the experimental and theoretical parts, and has increased knowledge of electrically self-sufficient vehicles.
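
One common way to identify rolling-resistance and aerodynamic-drag coefficients from experimental runs is a coast-down fit; the sketch below is a generic least-squares version on fabricated data, offered as an illustration of such identification techniques rather than the Helios team's actual procedure, and the mass, frontal area, and noise levels are assumptions.

```python
import numpy as np

m, g, rho, A = 80.0, 9.81, 1.2, 0.8        # assumed vehicle+pilot mass, gravity, air density, frontal area
Crr_true, Cd_true = 0.004, 0.12            # hidden "true" values used only to fabricate data

# Fabricated coast-down samples: speed v and measured deceleration a (throttle off).
v = np.linspace(3, 10, 40)                                    # m/s
a = -(Crr_true * g + 0.5 * rho * Cd_true * A * v**2 / m)
a += np.random.default_rng(2).normal(0, 0.002, v.size)        # measurement noise

# a = -(Crr*g) - (0.5*rho*Cd*A/m) * v^2  ->  linear in v^2, fit by least squares.
slope, intercept = np.polyfit(v**2, a, 1)
Cd_est = -slope * 2 * m / (rho * A)
Crr_est = -intercept / g
print(f"estimated Cd = {Cd_est:.3f}, Crr = {Crr_est:.4f}")
```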

Keywords: electrical vehicle, endurance, optimization, shell eco-marathon

Procedia PDF Downloads 244
3333 Effect of Stress Relief of the Footbath Using Bio-Marker in Japan

Authors: Harumi Katayama, Mina Suzuki, Taeko Muramatsu, Yui Shimogawa, Yoshimi Mizushima, Mitsuo Hiramatsu, Kimitsugu Nakamura, Takeshi Suzue

Abstract:

Purpose: Footbaths have long been part of the culture of hot-spring areas in Japan. This culture has moderately supported people's mental and physical health. In Japanese hospitals, nurses provide footbaths for severely ill patients for mental comfort. However, there is only limited evidence of the effect of footbaths on mental comfort. In this presentation, we show the stress-relieving effect of the footbath, using a biomarker, among 35 volunteer college students. Methods: The students were randomly assigned to two groups: a footbath group and a simple relaxation group. As a mental load, the Kraepelin test was given to the students beforehand. Ultra-weak chemiluminescence (UCL) in saliva and a self-administered linear scale measuring emotional state were assessed concurrently at four time points: before and after the mental load, after the stress relief, and 30 minutes after the stress relief. The scale measuring emotional state consisted of 7 factors (excitement, relaxation, vigor, fatigue, tension, calm, and sleepiness) with 22 items. ANOVA was used to evaluate the effect of the footbath on stress relief. Results: The level of UCL (photons/100 sec) was significantly increased in both groups after the mental load. After the two types of stress relief, the UCL (photons/100 sec) of the footbath group was significantly decreased compared to the simple relaxation group. Sleepiness and relaxation scores increased significantly more after the stress relief in the footbath group than in the simple relaxation group. However, scores for excitement, vigor, tension, and calm exhibited the same degree of decrease after the stress relief in both groups. Conclusion: Salivary UCL may be a sensitive biomarker for mild stress relief in nursing care. In the future, we will use UCL measurements to evaluate stress relief with inpatients, outpatients, or the general public as subjects.

Keywords: bio-marker, footbath, Japan, stress relief

Procedia PDF Downloads 317
3332 Establishing Forecasts Pointing Towards the Hungarian Energy Change Based on the Results of Local Municipal Renewable Energy Production and Energy Export

Authors: Balazs Kulcsar

Abstract:

Professional energy organizations perform analyses, mainly at the global and national levels, of the expected development of the share of renewables in electric power generation, heating and cooling, and the transport sectors. There are only a few publications, research institutions, non-profit organizations, and national initiatives with a focus on studies of individual towns and settlements. Issues concerning the self-supply of energy at the settlement level have not become widespread. The goal of our energy geography studies is to determine the share of local renewable energy sources in the settlement-based electricity supply across Hungary. The Hungarian energy supply system defines four categories based on the installed capacities of electric power generating units. From these categories, the theoretical annual electricity production of small-sized household power plants (SSHPP), featuring installed capacities under 50 kW, and small power plants with capacities under 0.5 MW has been taken into consideration. In the above-mentioned power plant categories, the Hungarian Electricity Act has allowed the establishment of power plants primarily for the utilization of renewable energy sources since 2008. Though with certain restrictions, these small power plants utilizing renewable energies have the closest links to individual settlements and can be regarded as the achievements of the host settlements in the shift of energy use. Based on 2017 data, we have ranked settlements to reflect their level of self-sufficiency in electricity production from renewable energy sources. The results show that supplying all the electricity demanded by settlements from local renewables is within reach now in small settlements, e.g., in the form of the small power plant categories discussed in the study, and is not at all impossible even in small towns and cities. In Hungary, 30 settlements produce more renewable electricity than their own annual electricity consumption. If these overproducing settlements export their excess electricity to neighboring settlements, then a full electricity supply from renewable sources by local small power plants can be realized in a further 29 settlements. These results provide an opportunity for governmental planning of the realization of the energy shift (legislative background, support system, environmental education), as well as for framing development forecasts and scenarios until 2030.
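
The settlement ranking described above reduces to a ratio of local renewable generation to annual consumption; the short sketch below shows that calculation on placeholder figures (the real study uses the 2017 Hungarian statistics), including the identification of over-producing settlements whose surplus could supply neighbours.

```python
import pandas as pd

# Placeholder settlement data in MWh/year; names and numbers are illustrative only.
df = pd.DataFrame({
    "settlement": ["A", "B", "C", "D"],
    "renewable_generation": [1200.0, 350.0, 90.0, 20.0],
    "annual_consumption": [800.0, 400.0, 120.0, 150.0],
})

df["self_sufficiency"] = df["renewable_generation"] / df["annual_consumption"]
df["surplus"] = df["renewable_generation"] - df["annual_consumption"]

overproducers = df[df["self_sufficiency"] >= 1.0]          # settlements producing more than they consume
print(df.sort_values("self_sufficiency", ascending=False))
print("exportable surplus (MWh/yr):", overproducers["surplus"].sum())
```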

Keywords: energy geography, Hungary, local small power plants, renewable energy sources, self-sufficiency settlements

Procedia PDF Downloads 138
3331 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues. This approach aims to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities. This model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities. It also enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities. The research utilizes diverse electrical utility datasets to develop and validate the predictive model; RFM analysis, K-means clustering, and LSTM networks are applied to these datasets to analyze and predict electrical failures and voltage reliability. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
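
A minimal sketch of the RFM-plus-K-means stage of the pipeline described above, with a toy failure log standing in for the utility datasets; component names, dates, and costs are placeholders. The LSTM stage would be trained separately on the time-ordered records within each cluster, which is only noted here.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy failure log for electrical components (all values are placeholders).
log = pd.DataFrame({
    "component": ["T1", "T1", "T2", "T3", "T3", "T3"],
    "failure_date": pd.to_datetime(
        ["2023-01-10", "2023-06-02", "2023-03-15", "2023-02-01", "2023-04-20", "2023-07-30"]),
    "repair_cost": [500, 700, 1200, 300, 250, 400],
})

now = pd.Timestamp("2023-08-01")
rfm = log.groupby("component").agg(
    recency=("failure_date", lambda d: (now - d.max()).days),   # days since last failure
    frequency=("failure_date", "count"),                         # number of failures
    monetary=("repair_cost", "sum"),                             # total failure cost
)

# Segment components into groups with similar failure patterns.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(StandardScaler().fit_transform(rfm))
rfm["cluster"] = labels
print(rfm)
# An LSTM would then be trained on the time-ordered records within each cluster
# to capture the temporal failure patterns mentioned in the abstract.
```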

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 39
3330 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images

Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou

Abstract:

This study proposes a generative adversarial network (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. The GAN framework then incorporates adversarial learning to train a feature generator for the minority class by playing a minimax game with a discriminator. The feature generator then generates features for the minority class from arbitrary latent distributions to balance the data between the majority class and the minority class. Additionally, a data cleaning technique, i.e., Tomek links, is employed to clean up undesirable conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for training. The experimental section evaluates the proposed method on two medical image analysis tasks, i.e., mass classification on mammograms and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance over state-of-the-art counterparts. Compared to all counterparts, our proposed method improves accuracy by more than 1.5 percentage points.
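
A compact sketch of adversarial feature-space augmentation followed by Tomek-link cleaning, assuming the minority-class feature vectors have already been produced by a trained extraction network; the layer sizes, step count, and random data are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
import numpy as np
from imblearn.under_sampling import TomekLinks

feat_dim, latent_dim, batch = 64, 16, 32
minority = torch.randn(200, feat_dim)                  # placeholder minority-class features
majority = np.random.randn(2000, feat_dim)             # placeholder majority-class features

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(500):                                # minimax game in feature space
    real = minority[torch.randint(0, len(minority), (batch,))]
    fake = G(torch.randn(batch, latent_dim))
    opt_d.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    d_loss.backward(); opt_d.step()
    opt_g.zero_grad()
    g_loss = bce(D(fake), ones)                        # generator tries to fool the discriminator
    g_loss.backward(); opt_g.step()

# Balance the classes with generated minority features, then clean with Tomek links.
synthetic = G(torch.randn(len(majority) - len(minority), latent_dim)).detach().numpy()
X = np.vstack([majority, minority.numpy(), synthetic])
y = np.array([0] * len(majority) + [1] * (len(minority) + len(synthetic)))
X_clean, y_clean = TomekLinks().fit_resample(X, y)
print(X.shape, "->", X_clean.shape)
```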

Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning

Procedia PDF Downloads 113
3329 Intercultural and Inclusive Teaching Competency Implementation within a Canadian Polytechnic's Academic Model: A Pre- and Post-Assessment Analysis

Authors: Selinda England, Ben Bodnaryk

Abstract:

With an unprecedented increase in provincial immigration and government support for more international and culturally diverse learners, a trades/applied-learning-focused polytechnic with four campuses within one Canadian province saw the need for intercultural awareness and an intercultural teaching competence strategy for faculty training. An institution-wide pre-assessment needs survey was conducted in 2018, in which 87% of faculty professed to have some or no training in working with international and/or culturally diverse learners. After researching fellow polytechnics in Canada and seeing very little in the way of faculty support for intercultural competence, an institutional project team comprising members from all facets of the polytechnic was created, including Indigenous experts, Academic Chairs, Directors, Human Resource Managers, and international/settlement subject matter experts. The project team was organized to develop and implement a new academic model focused on enriching intercultural competence among faculty. Utilizing a competency-based model, the project team incorporated inclusive terminology into competency indicators and devised a four-phase proposal for implementing intercultural teacher training: a series of workshops focused on the needs of international and culturally diverse learners, including teaching strategies based on current TESOL methodologies; literature and online resources for quick access when planning lessons; faculty assessment examples and models of interculturally proficient instructors; and future job descriptions - all of which promote and encourage the development of specific intercultural skills. Results from a post-assessment survey (to be conducted in Spring 2020) and caveats regarding improvements and next steps will be shared. The project team believes its intercultural and inclusive teaching competency-based model is one of the first institution-wide, faculty-supported initiatives within the Canadian college and polytechnic post-secondary educational environment; it aims to become a leader in both the province and the nation regarding intercultural competency training for trades-, industry-, and business-minded community colleges and applied learning institutions.

Keywords: cultural diversity and education, diversity training, teaching and learning, teacher training

Procedia PDF Downloads 98
3328 Selecting Graduates for the Interns’ Award by Using Multisource Feedback Process: Does It Work?

Authors: Kathryn Strachan, Sameer Otoom, Amal AL-Gallaf, Ahmed Al Ansari

Abstract:

Introduction: Introducing a reliable method to select graduates for an award in higher education can be challenging but is not impossible. Multisource feedback (MSF) is a popular assessment tool that relies on evaluations by different groups of people, including physicians and non-physicians. It is useful for assessing several domains, including professionalism, communication, and collaboration, and may be useful for selecting the best interns to receive a university award. Methods: 16 graduates responded to an invitation to participate in the student award, which was conducted by the Royal College of Surgeons in Ireland – Medical University of Bahrain (RCSI Bahrain) using the MSF process. Five individuals from the following categories rated each participant: physicians, nurses, and fellow students. RCSI Bahrain graduates were assessed in the following domains: professionalism, communication, and collaboration. The mean and standard deviation were calculated, and the award was given to the graduate who scored the highest among his/her colleagues. Cronbach's coefficient was used to determine the questionnaire's internal consistency and reliability. Factor analysis was conducted to examine construct validity. Results: 16 graduates participated in the RCSI-Bahrain interns' award based on the MSF process, giving a 16.5% response rate. The instrument was found to be suitable for factor analysis and showed a three-factor solution representing 79.3% of the total variance. Reliability analysis of internal consistency indicated that the full scale of the instrument had high internal consistency (Cronbach's α = 0.98). Conclusion: This study found the MSF process to be reliable and valid for selecting the best graduates for the interns' award. However, the low response rate may suggest that the process is not feasible for allowing the majority of students to participate in the selection process. Further research studies may be required to support the feasibility of the MSF process in selecting graduates for the university award.
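
A small sketch of the internal-consistency check reported above: Cronbach's α computed directly from item scores. The rating matrix is a placeholder generated to be internally consistent, not the actual MSF data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Placeholder MSF ratings: 80 raters x 12 items on a 1-5 scale, built around a shared trait.
rng = np.random.default_rng(3)
trait = rng.normal(3.5, 0.7, size=(80, 1))
ratings = np.clip(np.round(trait + rng.normal(0, 0.4, size=(80, 12))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```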

Keywords: MSF, RCSI, validity, Bahrain

Procedia PDF Downloads 326
3327 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a specific learning difficulty involving continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obscure critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD and scored at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 or below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network in the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children. Results showed that three out of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In sum, the present study was able to distill differences in the architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current “brain norm” established for the 45 children is tentative, the results from this study provide insights not only for future work on a “developmental brain norm” with reliable brain indicators but also into the viability of single-case methodology, which could be used to detect differential brain indicators in DD children for early detection and intervention.
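
A minimal sketch of a single-case comparison in the spirit of the method cited above. It implements the earlier frequentist Crawford–Howell t-test (the cited 2010 work develops a Bayesian variant), and the network-efficiency values are placeholders rather than the study's data.

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Compare one case against a small control sample (Crawford & Howell, 1998)."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1.0 / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)          # two-tailed p-value
    return t, p

# Placeholder nodal-efficiency values: 45 TD children vs. one child with DD.
rng = np.random.default_rng(4)
td_efficiency = rng.normal(0.55, 0.05, size=45)
t, p = crawford_howell(0.42, td_efficiency)
print(f"t({len(td_efficiency) - 1}) = {t:.2f}, p = {p:.3f}")
```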

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 41
3326 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization lend themselves to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations. However, the antenna parameters and geometries are too wide-ranging to fit into a single function. Therefore, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtaining the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted here and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for the simulations and results. MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data in order to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric properties, and K-means and Hamming windowing are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
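
A generic sketch of the evolutionary loop described above. The fitness function is only a stand-in for the electromagnetic simulation call (the paper drives HFSS from MATLAB through its API), and the design variables and their bounds are illustrative, not the actual Vivaldi design equations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative design variables: [slotline width (mm), flare aperture (mm), taper rate]
bounds = np.array([[0.2, 1.5], [20.0, 60.0], [0.02, 0.12]])

def fitness(params):
    # Stand-in for an electromagnetic simulation (the HFSS call in the original workflow):
    # returns a scalar score standing in for, e.g., combined gain and bandwidth.
    w, a, r = params
    return -(w - 0.8) ** 2 - ((a - 42.0) / 20.0) ** 2 - ((r - 0.07) / 0.05) ** 2

def random_population(n):
    return bounds[:, 0] + rng.random((n, len(bounds))) * (bounds[:, 1] - bounds[:, 0])

pop = random_population(20)
for gen in range(40):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-5:]]                        # keep the best candidate designs
    parents = elite[rng.integers(0, 5, (15, 2))]                # pick parent pairs
    children = parents.mean(axis=1)                             # simple recombination
    children += rng.normal(0, 0.05, children.shape) * (bounds[:, 1] - bounds[:, 0])  # mutation
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best candidate [width, aperture, taper]:", np.round(best, 3))
```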

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 97
3325 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research paper explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data are collected through virtual tests and physical experiments designed to simulate skater-mat impacts. The data are then analyzed to identify patterns and correlations and, finally, used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool using machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
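
A brief sketch of the kind of neural-network classifier such a tool could be built around; the impact variables, the toy risk rule, and the network size are all assumptions standing in for the study's simulation and experiment data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 400
# Placeholder features: [impact speed (m/s), mat stiffness (kN/m), body mass (kg), head-first impact (0/1)]
X = np.column_stack([rng.uniform(5, 15, n), rng.uniform(20, 120, n),
                     rng.uniform(50, 90, n), rng.integers(0, 2, n)])
risk = 0.3 * X[:, 0] - 0.02 * X[:, 1] + 2.5 * X[:, 3]               # toy risk score
y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)      # 1 = high concussion risk

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=6))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```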

Keywords: artificial neural networks, concussion, machine learning, impact, speed skater

Procedia PDF Downloads 78
3324 Comparing the Contribution of General Vocabulary Knowledge and Academic Vocabulary Knowledge to Learners' Academic Achievement

Authors: Reem Alsager, James Milton

Abstract:

Coxhead’s (2000) Academic Word List (AWL) is believed to be essential for students pursuing higher education, helps differentiate English for Academic Purposes (EAP) from General English as a course of study, and is thought to be important for comprehending English academic texts. The AWL has been described as an infrequent, discrete set of vocabulary items unreachable from general language. On the other hand, it has long been known that general vocabulary knowledge is a good predictor of academic achievement. This study, however, attempts to measure and compare the contributions of academic vocabulary knowledge and general vocabulary knowledge to learners’ GPA, to examine which kind of knowledge is the better predictor of academic achievement, and to investigate whether the AWL, as a specialised list of infrequent words, relates to the frequency effect. The participants comprised 44 international postgraduate students at Swansea University, all from the School of Management, following a taught MSc (Master of Science). The study employed the Academic Vocabulary Size Test (AVST) and the XK_Lex vocabulary size test. The findings indicate that the AWL is a list based on word frequency rather than a discrete and unique word list and that the AWL performs the same function as general vocabulary, with tests of each found to measure largely the same quality of knowledge. The findings also suggest that the contribution that AWL knowledge provides to academic success is not sufficient and that general vocabulary knowledge is better at predicting academic achievement. Furthermore, the contribution that academic vocabulary knowledge adds above that of general vocabulary knowledge when the two are combined is notably small. This study’s results are in line with the argument that the development of general vocabulary size is an essential quality for academic success, and acquiring the words of the AWL will form part of this process. The AWL by itself does not provide sufficient coverage, and is probably not specialised enough, for knowledge of this list to influence this general process. It can be concluded that the AWL, as an academic word list, represents only a fraction of the words that are actually needed for academic success in English and that knowledge of academic vocabulary combined with general vocabulary knowledge above the most frequent 3,000 words is what matters most to ultimate academic success.
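
The comparison of predictive contributions described above is, in essence, a hierarchical regression; the sketch below compares R² for GPA models built on general vocabulary size alone, academic vocabulary score alone, and both together. The scores are fabricated and the exact statistical procedure used in the study is not specified here, so this is only an illustration of the comparison.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 44
general = rng.normal(5000, 1200, n)                          # fabricated general vocabulary sizes
academic = 0.6 * general / 10 + rng.normal(0, 60, n)         # fabricated academic scores, correlated with general
gpa = 0.0004 * general + 0.0005 * academic + rng.normal(0, 0.3, n)

def r2(features):
    X = np.column_stack(features)
    return LinearRegression().fit(X, gpa).score(X, gpa)

r_general, r_academic, r_both = r2([general]), r2([academic]), r2([general, academic])
print(f"R2 general only  = {r_general:.3f}")
print(f"R2 academic only = {r_academic:.3f}")
print(f"R2 both          = {r_both:.3f} (added {r_both - r_general:+.3f} over general)")
```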

Keywords: academic achievement, academic vocabulary, general vocabulary, vocabulary size

Procedia PDF Downloads 206
3323 Using Autoencoder as Feature Extractor for Malware Detection

Authors: Umm-E-Hani, Faiza Babar, Hanif Durad

Abstract:

Malware-detection approaches suffer from many limitations, due to which anti-malware solutions have failed to be reliable enough for detecting zero-day malware. Signature-based solutions depend upon signatures that can be generated only when malware surfaces at least once in the cyber world. Another approach, which works by detecting the anomalies caused in the environment, can easily be defeated by diligently and intelligently written malware. Solutions trained to observe behavior for detecting malicious files have failed to cater to malware capable of detecting a sandboxed or protected environment. Machine learning and deep learning-based approaches suffer greatly when training their models with either an imbalanced dataset or an inadequate number of samples. AI-based anti-malware solutions that have been trained with enough samples target a selected feature vector, thus ignoring the contribution of the remaining features to the maliciousness of malware, just to cope with limited underlying hardware processing power. Our research focuses on producing an anti-malware solution for detecting malicious PE files by circumventing the aforementioned shortcomings. Our proposed framework, which is based on automated feature engineering through autoencoders, trains the model on a fairly large dataset. It focuses on the visual patterns of malware samples to automatically extract the meaningful part of the visual pattern. Our experiment has successfully produced a state-of-the-art accuracy of 99.54% on test data.
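
A minimal sketch of the autoencoder-as-feature-extractor idea, assuming PE files have already been rendered as fixed-size grayscale images; the architecture, dimensions, random data, and the downstream logistic-regression classifier are placeholders, not the framework's actual configuration.

```python
import numpy as np
from tensorflow.keras import layers, Model
from sklearn.linear_model import LogisticRegression

# Placeholder "malware images": flattened 64x64 grayscale renderings of PE files.
X = np.random.rand(1000, 64 * 64).astype("float32")
y = np.random.randint(0, 2, 1000)                               # 1 = malicious, 0 = benign (placeholder labels)

inputs = layers.Input(shape=(64 * 64,))
encoded = layers.Dense(512, activation="relu")(inputs)
encoded = layers.Dense(64, activation="relu")(encoded)          # compressed feature vector
decoded = layers.Dense(512, activation="relu")(encoded)
decoded = layers.Dense(64 * 64, activation="sigmoid")(decoded)

autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)       # learn to reconstruct the images

# Use the learned 64-dimensional features to train a separate classifier.
features = encoder.predict(X, verbose=0)
clf = LogisticRegression(max_iter=1000).fit(features, y)
print("train accuracy on placeholder data:", clf.score(features, y))
```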

Keywords: malware, auto encoders, automated feature engineering, classification

Procedia PDF Downloads 58
3322 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset of the Wolof words for ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound frequency feature spectra, and an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.
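
A minimal sketch of the MFCC-plus-CNN pipeline described above, with synthetic one-second clips in place of the Wolof recordings; the sampling rate, feature sizes, and network shape are assumptions used only to show the feature-to-classifier flow.

```python
import numpy as np
import librosa
from tensorflow.keras import layers, models

SR, N_MFCC, FRAMES = 16000, 40, 32

def to_mfcc(waveform):
    # Mel-frequency cepstral coefficients, padded/cropped to a fixed number of frames.
    m = librosa.feature.mfcc(y=waveform, sr=SR, n_mfcc=N_MFCC)
    m = np.pad(m, ((0, 0), (0, max(0, FRAMES - m.shape[1]))))[:, :FRAMES]
    return m

# Synthetic one-second clips standing in for recorded yes/no responses.
X = np.stack([to_mfcc(np.random.randn(SR).astype("float32")) for _ in range(64)])[..., None]
y = np.random.randint(0, 2, 64)

model = models.Sequential([
    layers.Input(shape=(N_MFCC, FRAMES, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),     # yes / no
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)
print(model.evaluate(X, y, verbose=0))
```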

Keywords: automatic speech recognition, interactive voice response, voice response recognition, Wolof word classification

Procedia PDF Downloads 98
3321 Biographical Learning and Its Impact on the Democratization Processes of Post War Societies

Authors: Rudolf Egger

Abstract:

This article presents some results of an ongoing project in Kosova. The project deals with the meaning of social transformation processes in the life courses of Kosova’s people. One goal is to create an oral history archive in the country. Over the last seven years, we have carried out interpretative work (using narrative interviews) on the experiences and meanings of social change from the perspective of the life course. We want to reconstruct the individual possibilities for shaping one’s life within new social structures. After the terrible massacres of ethnically and territorially defined nationalism in the former Yugoslavia, the main focus is on the many small daily steps that must be taken to build up a kind of “normality” in the country. These steps can be reconstructed very well through narrations, through life stories, because personal experiences are naturally linked with social orders. Each individual story is connected with further stories, in which collective history is negotiated and reflected. The view of biographical narration opens the possibility of analysing the concreteness of the “individual case” within the complexity of collective history. Life stories thereby have a transitional character, which is why they can be used for the reconstruction of periods of political transformation. For example, in individual stories we can clearly find the national or mythological character of the Albanian people in Kosova. The narrations presented can also be read as narrative lines relating to the (re-)interpretation of the past, in which lived life is fixed into history in Kosova’s so-called collective memory.

Keywords: biographical learning, adult education, social change, post war societies

Procedia PDF Downloads 402
3320 A Conundrum of Teachability and Learnability of Deaf Adult English as Second Language Learners in Pakistani Mainstream Classrooms: Integration or Elimination

Authors: Amnah Moghees, Saima Abbas Dar, Muniba Saeed

Abstract:

Teaching a second language to deaf learners has always been a challenge in Pakistan. Different approaches and strategies have been followed, but they have resulted in partial or complete failure. The study aims to investigate the language problems faced by adult deaf learners of English as a second language in mainstream classrooms. Moreover, it determines the factors that shape language teaching and learning in mainstream classes. To investigate the language problems, data will be collected through writing samples from ten deaf adult learners and ten normal ESL learners of the same class, while observations in inclusive language-teaching classrooms and interviews with five ESL teachers of inclusive classes will be conducted to identify the factors directly or indirectly involved in inclusive language education. A qualitative research paradigm will be applied to analyse the corpus. The study finds that deaf ESL learners face severe language issues, such as odd sentence structures, subject-verb agreement violations, and misuse of verb forms and tenses, compared with normal ESL learners. The study also indicates that multiple factors affect the smoothness of teaching and learning in mainstream classrooms: the role of the mediator, the level of the deaf learners, the empathy of normal learners towards deaf learners, and language teachers’ training.

Keywords: deaf English language learner, empathy, mainstream classrooms, previous language knowledge of learners, role of mediator, language teachers' training

Procedia PDF Downloads 149
3319 A Socio-Cultural Approach to Implementing Inclusive Education in South Africa

Authors: Louis Botha

Abstract:

Since the presentation of South Africa’s inclusive education strategy in Education White Paper 6 in 2001, very little has been accomplished in terms of its implementation. The failure to achieve the goals set by this policy document is related to teachers lacking confidence and knowledge about how to enact inclusive education, as well as challenges of inflexible curricula, limited resources in overcrowded classrooms, and so forth. This paper presents a socio-cultural approach to addressing these challenges of implementing inclusive education in the South African context. It takes its departure from the view that inclusive education has been adequately theorized and conceptualized in terms of its philosophical and ethical principles, especially in South African policy and debates. What is missing, however, are carefully theorized, practically implementable research interventions which can address the concerns mentioned above. Drawing on socio-cultural principles of learning and development and on cultural-historical activity theory (CHAT) in particular, this paper argues for the use of formative interventions which introduce appropriately constructed mediational artifacts that have the potential to initiate inclusive practices and pedagogies within South African schools and classrooms. It makes use of Vygotsky’s concept of double stimulation to show how the proposed artifacts could instigate forms of transformative agency which promote the adoption of inclusive cultures of learning and teaching.

Keywords: cultural-historical activity theory, double stimulation, formative interventions, transformative agency

Procedia PDF Downloads 211
3318 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, first illustrated by Santiago Ramón y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components: they integrate inputs in a non-linear fashion and actively participate in computations. Nevertheless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation, giving the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and it is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation; this variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same thing through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are made to fire in a particular order, the membrane potentials are reset, and the five input neurons are then fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as synaptic strengths can be learned with STDP to make a neuron more sensitive to its input, dendritic relationships can be learned with STDP to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
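To make the order-sensitivity argument concrete, the following minimal NumPy sketch contrasts a plain LIF neuron with one whose effective synaptic weights are modulated by a pairwise relation matrix driven by the recent activity of the other synapses. It is an illustration under assumed parameter values, not the authors’ BindsNET implementation.

```python
# Minimal sketch: a LIF neuron with pairwise "relation" variables that scale
# each synapse by the recent activity traces of the other synapses, compared
# against a plain LIF neuron. All parameter values are assumptions.
import numpy as np

N_INPUTS, TAU_MEM, TAU_TRACE = 5, 10.0, 5.0

def run_neuron(spike_times, weights, relation=None, steps=60):
    """Simulate one LIF neuron; relation[i, j] scales synapse i by trace j."""
    v, trace, v_record = 0.0, np.zeros(N_INPUTS), []
    for t in range(steps):
        spikes = np.array([1.0 if spike_times.get(i) == t else 0.0
                           for i in range(N_INPUTS)])
        eff = weights.copy()
        if relation is not None:
            # short-term modulation by preceding activity of other synapses
            eff = weights * (1.0 + relation @ trace)
        v += eff @ spikes - v / TAU_MEM        # leaky integration at the soma
        trace += spikes - trace / TAU_TRACE    # per-synapse activity traces
        v_record.append(v)
    return max(v_record)                        # peak membrane potential

weights = np.ones(N_INPUTS) * 0.5
relation = np.triu(np.ones((N_INPUTS, N_INPUTS)), k=1) * 0.8  # order-sensitive

forward = {i: 2 * i for i in range(N_INPUTS)}                    # fire 0..4
reverse = {i: 2 * (N_INPUTS - 1 - i) for i in range(N_INPUTS)}   # fire 4..0

# The plain LIF neuron peaks equally for both orders; the "dendritic" neuron
# responds more strongly to the reverse sequence, discriminating the two.
print("plain LIF :", run_neuron(forward, weights), run_neuron(reverse, weights))
print("dendritic :", run_neuron(forward, weights, relation),
      run_neuron(reverse, weights, relation))
```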

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 115
3317 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System

Authors: June-Jei Kuo, Yi-Chuan Hsieh

Abstract:

Because of the rapid growth of information technology, more and more libraries are introducing new information retrieval systems to enhance the user experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few studies have discussed usability from the users’ perspective. The aims of this study are to understand how the information retrieval system is used and why users are willing to continue using the web-scale discovery system, in order to improve the system and promote the use of university libraries. In addition to questionnaires, observations, and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate system quality, information quality, service quality, use, user satisfaction, flow, and continued use of the web-scale discovery system among students of National Chung Hsing University. The results are analysed through descriptive statistics and structural equation modelling using AMOS. They reveal that, in the web-scale discovery system, users’ evaluations of system quality, information quality, and service quality are positively related to use and satisfaction; however, service quality affects only user satisfaction. User satisfaction and flow have a significant impact on continued use, and user satisfaction also has a significant impact on flow. Based on these results, academic libraries are advised to maintain the stability of the information retrieval system, improve the quality of the information content, and strengthen the relationship between subject librarians and students. The system provider, meanwhile, is advised to improve the user interface, reduce the number of system-level layers, strengthen data accuracy and relevance, refine the sorting criteria, and support an auto-correct function. Finally, establishing better communication with librarians is recommended for all users.
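The hypothesised relationships above amount to a structural equation model. As a rough illustration only, the sketch below writes that path structure in lavaan-style syntax and estimates it with the open-source semopy package in Python; the study itself used AMOS, and all variable, indicator, and file names here are hypothetical.

```python
# A hedged sketch of the structural model described in the abstract, expressed
# with semopy. Indicator column names are hypothetical placeholders.
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model (latent constructs with hypothetical indicators)
SystemQuality =~ sq1 + sq2 + sq3
InformationQuality =~ iq1 + iq2 + iq3
ServiceQuality =~ svq1 + svq2 + svq3
Use =~ use1 + use2
Satisfaction =~ sat1 + sat2 + sat3
Flow =~ flow1 + flow2 + flow3
ContinuedUse =~ cu1 + cu2

# structural model following the hypothesised paths
Use ~ SystemQuality + InformationQuality + ServiceQuality
Satisfaction ~ SystemQuality + InformationQuality + ServiceQuality + Use
Flow ~ Satisfaction
ContinuedUse ~ Satisfaction + Flow
"""

def fit_model(csv_path: str) -> pd.DataFrame:
    """Fit the SEM on survey data and return the parameter estimates."""
    data = pd.read_csv(csv_path)       # one row per respondent
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                    # maximum-likelihood estimation
    return model.inspect()             # path coefficients and p-values

if __name__ == "__main__":
    print(fit_model("survey_responses.csv"))  # hypothetical file name
```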

Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library

Procedia PDF Downloads 85
3316 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are used to comprehensively assess segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
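For readers who want a concrete picture of the contracting/expansive design, the sketch below defines a compact U-Net in PyTorch for single-channel ultrasound images, together with the Dice coefficient used as one of the evaluation metrics. The depth, channel widths, and input size are assumptions and not the exact model described in the abstract.

```python
# A minimal, illustrative U-Net sketch for grayscale ultrasound images.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 conv + ReLU blocks, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        # Contracting path
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.enc3 = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(base * 4, base * 8)
        # Expansive path with skip connections
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.dec3 = double_conv(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)  # pixel-wise logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # apply sigmoid + threshold to obtain a binary mask

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Dice overlap between binary masks, one of the metrics cited above."""
    pred, true = pred_mask.float(), true_mask.float()
    inter = (pred * true).sum()
    return (2 * inter + eps) / (pred.sum() + true.sum() + eps)

if __name__ == "__main__":
    net = UNet()
    x = torch.randn(1, 1, 128, 128)   # dummy 128x128 ultrasound image
    print(net(x).shape)               # torch.Size([1, 1, 128, 128])
```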

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 65
3315 Transformation of the Institutionality of International Cooperation in Ecuador from 2007 to 2017: A Case of State Identity Affirmation through Role Performance

Authors: Natalia Carolina Encalada Castillo

Abstract:

As part of an intended radical policy change compared to former administrations in Ecuador, the transformation of the institutionality of international cooperation during the period of President Rafael Correa was considered a key element in the construction of the state of 'Good Living'. This intention led to several regulatory changes in the reception of cooperation for development, and even to the departure of some foreign cooperation agencies. Moreover, Ecuador launched the initiative to become a donor of cooperation to other developing countries through the ‘South-South Cooperation’ approach. All these changes were institutionalized through the Ecuadorian System of International Cooperation as a new framework to establish rules and policies that guarantee sovereign management of foreign aid. This research project has therefore been guided by two questions: What were the factors that motivated the transformation of the institutionality of international cooperation in Ecuador from 2007 to 2017? And what were the implications of this transformation for the country’s international role? This paper seeks to answer these questions through Role Theory within a Constructivist meta-theoretical perspective, considering that, in this case, changes at the institutional level in the field of cooperation responded not only to material motivations but also to interests built on the basis of a specific state identity. The latter could only be affirmed through specific roles such as ‘sovereign recipient of cooperation’ and ‘donor of international cooperation’. However, the performance of these roles was problematic, as they were not easily accepted by other actors in the international arena or at the domestic level. In terms of methodology, these dynamics are analysed qualitatively, mainly through interpretive analysis of the discourse of high-level decision-makers from Ecuador and other cooperation actors. Complementing this, document-based research and interviews have been conducted. Finally, it is concluded that even though material factors such as infrastructure needs, trade and investment interests, and the reinforcement of state control and monitoring of cooperation flows motivated the institutional transformation of international cooperation in Ecuador, the essential basis of these changes was the search for a new identity for the country to project in the international arena. This identity has started to be built but remains unstable. It is therefore important to build on the achievements of the new international cooperation policies and to review their weaknesses, so that both the non-reimbursable cooperation funds received and ‘South-South cooperation’ actions contribute effectively to national objectives.

Keywords: Ecuador, international cooperation, Role Theory, state identity

Procedia PDF Downloads 187