Search results for: deep learning methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21438

16548 Inclusive Education in South African Universities: Pre-Service Teachers’ Experiences

Authors: Cina Mosito, Toyin Mary Adewumi, Charlene Nissen

Abstract:

One of the goals of inclusive education is to provide learners with suitable learning environments and opportunities to attain their full potential. This study sought to determine how studying inclusive education shaped pre-service teachers’ teaching within the South African education context. A purposeful sample of 6 pre-service teachers was selected from a university of technology in the Western Cape, South Africa. Data were collected using open-ended, exploratory questionnaires and analyzed thematically. The pre-service teachers reported a substantial range of experiences of studying inclusive education. These included inclusive education as an “eye-opener” to the fact that learners experiencing various barriers to learning can be accommodated in regular classrooms, and exposure to aspects of inclusive education such as diversity, learners’ rights, and curriculum differentiation. It was also revealed that studying inclusive education made pre-service teachers love and enjoy teaching more. The study shows that awareness of inclusive education has influenced pre-service teachers in South African schools.

Keywords: experience, inclusive education, pre-service teacher, South Africa

Procedia PDF Downloads 203
16547 Introducing Data-Driven Learning into Chinese Higher Education English for Academic Purposes Writing Instructional Settings

Authors: Jingwen Ou

Abstract:

Writing for academic purposes in a second or foreign language is one of the most important and most demanding skills for non-native speakers to master. Traditionally, EAP writing instruction at the tertiary level encompasses the teaching of academic genre knowledge, more specifically, the disciplinary writing conventions, the rhetorical functions, and specific linguistic features. However, one of the main sources of challenges in English academic writing for L2 students at the tertiary level can still be found in proficiency in academic discourse, especially vocabulary, academic register, and organization. Data-Driven Learning (DDL) is defined as “a pedagogical approach featuring direct learner engagement with corpus data”. Over the past two decades, the application of the DDL approach to EAP writing teaching has grown markedly in popularity. Such a combination has not only transformed traditional pedagogy, aided by published DDL guidebooks in classroom use, but also triggered global research on corpus use in EAP classrooms. This study presents a systematic literature review of research at the intersection of DDL and EAP writing instruction, covering both indirect and direct DDL practice in EAP writing instructional settings in China. Furthermore, the review provides a synthesis of significant discoveries emanating from prior research investigations concerning Chinese university students’ perception of Data-Driven Learning (DDL) and the subsequent impact on their academic writing performance following corpus-based training. Research papers were selected from Scopus-indexed journals and core journals from two main Chinese academic databases (CNKI and Wanfang) published in both English and Chinese over the last ten years, based on keyword searches.
Results indicated an insufficiency of empirical DDL research despite a noticeable upward trend in corpus research on discourse analysis and indirect corpus applications for material design by language teachers. Research on the direct use of corpora and corpus tools in DDL, particularly in combination with genre-based EAP teaching, remains a relatively small fraction of the whole body of research in Chinese higher education settings. Such scarcity is highly related to the prevailing absence of systematic training in English academic writing registers within most Chinese universities' EAP syllabi due to the Chinese English Medium Instruction policy, where only English major students are mandated to submit English dissertations. Findings also revealed that Chinese learners still held mixed attitudes towards corpus tools influenced by learner differences, limited access to language corpora, and insufficient pre-training on corpus theoretical concepts, despite their improvements in final academic writing performance.

Keywords: corpus linguistics, data-driven learning, EAP, tertiary education in China

Procedia PDF Downloads 46
16546 Select-Low and Select-High Methods for the Wheeled Robot Dynamic States Control

Authors: Bogusław Schreyer

Abstract:

The paper examines two methods of wheeled-robot braking torque control, applied when the adhesion coefficient under the left-side wheels differs from that under the right-side wheels. In the select-low (SL) method, the braking torque on both wheels is controlled by signals originating from the wheels on the side with lower adhesion. In the select-high (SH) method, the torque is controlled by signals originating from the wheels on the side with higher adhesion. The SL method ensures stable and safe robot behavior during the braking process; however, its efficiency is relatively low. The SH method is more efficient in terms of braking time and distance, but in some situations may cause wheel locking. It is important to monitor the velocity of all wheels and then decide on the braking torque distribution accordingly. In the case of the SH method, the slope of the braking torque may require a significant decrease in order to avoid wheel locking.
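The selection rule described in the abstract can be sketched as follows. This is an illustrative reading only, with hypothetical signal names; the actual controller operates on signals originating from the wheels on each side:

```python
def braking_torque(left_signal, right_signal, mode="SL"):
    """Select the control signal used to set the braking torque on both wheels.

    left_signal, right_signal: signals from the wheels on each side
        (e.g., estimated adhesion); hypothetical names for illustration.
    mode: "SL" follows the lower-adhesion side (stable and safe, but less
        efficient); "SH" follows the higher-adhesion side (shorter braking
        distance, but risks wheel locking).
    """
    if mode == "SL":
        return min(left_signal, right_signal)
    return max(left_signal, right_signal)
```

Under the SH rule, as the abstract notes, the torque slope may additionally have to be reduced when wheel velocities indicate imminent locking.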

Keywords: select-high, select-low, torque distribution, wheeled robots

Procedia PDF Downloads 117
16545 Image Inpainting Model with Small-Sample Size Based on Generative Adversarial Network and Genetic Algorithm

Authors: Jiawen Wang, Qijun Chen

Abstract:

The performance of most machine-learning methods for image inpainting depends on the quantity and quality of the training samples. However, in many scenarios it is very expensive or even impossible to obtain a large number of training samples. In this paper, an image inpainting model based on a generative adversarial network (GAN) is constructed for cases where the number of training samples is small. Firstly, a feature extraction network (F-net) is incorporated into the GAN to exploit the available information in the image to be inpainted. The weighted sum of the extracted feature and random noise acts as the input to the generative network (G-net). The proposed network can be trained well even when the sample size is very small. Secondly, in the completion phase for each damaged image, a genetic algorithm is designed to search for an optimized noise input for G-net; based on this optimized input, the parameters of the G-net and F-net are further tuned (once the completion of a given damaged image ends, the parameters are restored to the original values obtained in the training phase) to generate an image patch that not only fills the missing part of the damaged image smoothly but also carries visual semantics.
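The genetic-algorithm search over the noise input can be illustrated with a minimal sketch. Here `generator` is a stand-in for the trained G-net, the fitness is a masked reconstruction error on the undamaged pixels, and all hyperparameters are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def ga_search_latent(generator, damaged, mask, dim=16, pop=30, gens=40, seed=0):
    """Search for a latent vector z such that generator(z) matches the known
    pixels of a damaged image (mask == 1 where pixels are valid).
    Illustrative stand-in for the G-net noise-input optimization."""
    rng = np.random.default_rng(seed)
    popn = rng.normal(size=(pop, dim))

    def fitness(z):
        # Negative masked reconstruction error on the undamaged pixels.
        return -np.sum(mask * (generator(z) - damaged) ** 2)

    for _ in range(gens):
        scores = np.array([fitness(z) for z in popn])
        elite = popn[np.argsort(scores)[::-1][: pop // 2]]          # selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        alpha = rng.random((pop, 1))
        popn = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
        popn += 0.1 * rng.normal(size=popn.shape)                   # mutation
    scores = np.array([fitness(z) for z in popn])
    return popn[np.argmax(scores)]
```

In the paper the best latent vector found this way seeds a further fine-tuning of the network parameters; the sketch stops at the search step itself.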

Keywords: image inpainting, generative adversarial nets, genetic algorithm, small-sample size

Procedia PDF Downloads 125
16544 Spectral Analysis Applied to Variables of Oil Wells Profiling

Authors: Suzana Leitão Russo, Mayara Laysa de Oliveira Silva, José Augusto Andrade Filho, Vitor Hugo Simon

Abstract:

Currently, seismic and prospecting methods are commonly applied in the oil industry. Oil is a non-renewable energy source, which makes it easy to understand why areas of oil extraction are coveted by many nations, and why it is necessary to consider ways of maximizing oil production. The technique of spectral analysis can be used to analyze the behavior of the variables already defined in the oil-well profile. The main objective is to verify the serial dependence of the variables and to model them in the frequency domain, observing the model residuals.
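A frequency-domain view of a well-log variable can be obtained with a sample periodogram; a minimal sketch (a standard construction, not necessarily the authors' exact procedure):

```python
import numpy as np

def periodogram(x):
    """Sample periodogram of a series: squared magnitude of the DFT of the
    mean-removed data, one value per Fourier frequency up to Nyquist."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n)       # cycles per sampling interval
    return freqs, spec
```

A peak in the periodogram at some frequency indicates serial dependence at the corresponding period, which is the kind of structure the abstract proposes to model.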

Keywords: oil, well, spectral analysis, oil extraction

Procedia PDF Downloads 527
16543 A Generative Adversarial Framework for Bounding Confounded Causal Effects

Authors: Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu

Abstract:

Causal inference from observational data is receiving wide applications in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, with an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method does not make any assumption about the data generating process and the type of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method.
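As a point of reference for what "bounding under unidentifiability" means, the simplest no-assumption (Manski) bounds for a binary treatment with a bounded outcome can be computed in closed form. This is not the adversarial method of the abstract, only a much simpler baseline for the same problem:

```python
def manski_bounds(p_x1, mean_y_x1, mean_y_x0):
    """No-assumption bounds on the average causal effect E[Y(1)] - E[Y(0)]
    for a binary treatment X and an outcome Y bounded in [0, 1]:
    unobserved potential outcomes are replaced by their extremes 0 and 1.

    p_x1:       P(X = 1)
    mean_y_x1:  E[Y | X = 1]
    mean_y_x0:  E[Y | X = 0]
    """
    p_x0 = 1.0 - p_x1
    lower = mean_y_x1 * p_x1 - (mean_y_x0 * p_x0 + p_x1)
    upper = (mean_y_x1 * p_x1 + p_x0) - mean_y_x0 * p_x0
    return lower, upper
```

These bounds always have width 1 for a [0, 1] outcome; the adversarial search over causal models consistent with the observed distribution aims to tighten them.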

Keywords: average causal effect, hidden confounding, bound estimation, generative adversarial learning

Procedia PDF Downloads 183
16542 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip

Authors: Sina Saadati

Abstract:

Multiprocessor Systems-on-Chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct purposes. Most of the proposed task schedulers attempt to reduce energy consumption. In some schedulers, the processor temperature is considered in order to increase the system's reliability and performance. In this research, we propose a new method for thermal-aware task scheduling based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process. Factors such as ambient temperature, season (which is important for some embedded systems), processor speed, and the computation type of the tasks have a complex relationship with the final temperature of the system. This issue can be addressed using a machine learning algorithm. Another point is that our solution makes the system intelligent, so that it can be adaptive. We also show that the computational complexity of the proposed method is low; as a consequence, it is also suitable for battery-powered systems.
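Given a trained temperature predictor, the scheduling decision itself reduces to an argmin over processors. A minimal sketch in which `predict_temp` stands in for the trained ANN and all names are hypothetical:

```python
import numpy as np

def assign_task(task_features, processor_states, predict_temp):
    """Pick the processor whose predicted final temperature is lowest.

    `predict_temp` stands in for the trained ANN of the abstract: it maps a
    feature vector (ambient temperature, season, processor speed, task type,
    current processor state, ...) to a temperature estimate.
    """
    temps = [predict_temp(np.concatenate([task_features, state]))
             for state in processor_states]
    return int(np.argmin(temps))
```

Because the predictor is learned, the same decision rule adapts as the ANN is retrained on new thermal measurements.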

Keywords: task scheduling, MPSoC, artificial neural network, machine learning, computer architecture, artificial intelligence

Procedia PDF Downloads 99
16541 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, Precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept and information-processing tool in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “a needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers.
Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of the input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to a better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 63
16540 Teaching English for Specific Purposes to Business Students through Social Media

Authors: Candela Contero Urgal

Abstract:

Using realia to teach English for Specific Purposes (ESP) is a must, as it is thought to be designed to meet the students’ real needs in their professional life. Teachers are then expected to offer authentic materials and set students in authentic contexts where their learning outcomes can be highly meaningful. One way of engaging students is using social networks as a way to bridge the gap between their everyday life and their ESP learning outcomes. It is in ESP, particularly in Business English teaching, that our study focuses, as the ongoing process of digitalization is leading firms to use social media to communicate with potential clients. The present paper is aimed at carrying out a case study in which different digital tools are employed as a way to offer a collection of formats businesses are currently using so as to internationalize and advertise their products and services. A secondary objective of our study will then be to progress on the development of multidisciplinary competencies students are to acquire during their degree. A two-phased study will be presented. The first phase will cover the analysis of course tasks accomplished by undergraduate students at the University of Cadiz (Spain) in their third year of the Degree in Business Management and Administration by comparing the results obtained during the years 2019 to 2021. The second part of our study will present a survey conducted to these students in 2021 and 2022 so as to verify their interest in learning new ways to digitalize as well as internationalize their future businesses. Findings will confirm students’ interest in working with updated realia in their Business English lessons, as a consequence of their strong belief in the necessity to have authentic contexts and didactic resources. 
Despite the limitations social media can have as a means to teach business English, students will still find it highly beneficial since it will foster their familiarisation with the digital tools they will need to use when they get to the labour market.

Keywords: English for specific purposes, business English, internationalization of higher education, foreign language teaching

Procedia PDF Downloads 106
16539 Debate, Discontent and National Identity in a Secular State

Authors: Man Bahadur Shahu

Abstract:

Secularism has been a controversial, debated, and misinterpreted issue in Nepal since its endorsement in the 2007 constitution. Unprecedented acts favoring and disfavoring secularism have been seen within the public domain, which creates fallacies and suspicions in the rationalization and modernization process. This paper highlights three important points: first, secularization suddenly ruptures the silence and institutional decline of religion within the state. Second, state efforts on secularism simultaneously foster state neutrality and the separation of state from religious institutions, which amplifies the recognition of all religious groups through equal treatment of their festivities, rituals, and practices. Third, no state is completely secular, because of the deep-rooted mindsets and dispositions tied to people's own religious faiths and beliefs, which largely fuel intergroup conflict, disputes, riots, and turbulence in the post-secular period in the name of proselytizing and conversion.

Keywords: conflict, proselytizing, religion, secular

Procedia PDF Downloads 148
16538 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods

Authors: Sandeep Santosh, O. P. Sahu

Abstract:

In this paper, the performance of the wideband covariance matrix sparse representation (W-CMSR) method is compared with that of other existing wideband direction of arrival (DOA) estimation methods. W-CMSR relies less on a priori knowledge of the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper right triangular elements can be represented by the lower left triangular ones. As the main diagonal elements are contaminated by unknown noise variance, they are skipped, and the lower left triangular elements are aligned column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods, namely the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate them clearly, and a number of pseudo peaks appear in the spectrum of l1-SVD.
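The construction of the measurement vector, skipping the noise-contaminated main diagonal and stacking the lower-left triangular elements column by column, can be sketched as:

```python
import numpy as np

def measurement_vector(R):
    """Stack the strictly lower-left triangular elements of the (conjugate
    symmetric) array-output covariance matrix column by column, skipping the
    main diagonal, whose entries are contaminated by unknown noise variance."""
    m = R.shape[0]
    return np.concatenate([R[j + 1:, j] for j in range(m - 1)])
```

For an M x M covariance matrix this yields a vector of length M(M-1)/2, which is the input to the sparse representation step.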

Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering

Procedia PDF Downloads 466
16537 Modeling and Simulation of Underwater Flexible Manipulator as Rayleigh Beam Using Bond Graph

Authors: Sumit Kumar, Sunil Kumar, Chandan Deep Singh

Abstract:

This paper presents modeling and simulation of flexible robot in an underwater environment. The underwater environment completely contrasts with ground or space environment. The robot in an underwater situation is subjected to various dynamic forces like buoyancy forces, hydrostatic and hydrodynamic forces. The underwater robot is modeled as Rayleigh beam. The developed model further allows estimating the deflection of tip in two directions. The complete dynamics of the underwater robot is analyzed, which is the main focus of this investigation. The control of robot trajectory is not discussed in this paper. Simulation is performed using Symbol Shakti software.

Keywords: bond graph modeling, dynamic modeling, rayleigh beam, underwater robot

Procedia PDF Downloads 580
16536 Effect of Scalping on the Mechanical Behavior of Coarse Soils

Authors: Nadine Ali Hassan, Ngoc Son Nguyen, Didier Marot, Fateh Bendahmane

Abstract:

This paper presents a study of the effect of scalping methods on the mechanical properties of coarse soils, by resorting to numerical simulations based on the discrete element method (DEM) and to experimental triaxial tests. Two reconstitution methods are used, designated as the scalping method and the substitution method. Triaxial compression tests are first simulated on a granular material with a gap-graded particle size distribution by using the DEM. We study the effect of these reconstitution methods on the stress-strain behavior of coarse soils with different fine contents and with different ways of controlling the densities of the scalped and substituted materials. Experimental triaxial tests are performed on original mixtures of sands and gravels with different fine contents and on their corresponding scalped and substituted samples. Numerical results are qualitatively compared to experimental ones. Agreements and discrepancies between these results are also discussed.

Keywords: coarse soils, mechanical behavior, scalping, replacement, triaxial devices

Procedia PDF Downloads 200
16535 Preparation of Regional Input-Output Table for Fars Province in 2011: GRIT Method

Authors: Maryam Akbarzadeh, F. Esmaeilzadeh, A. Poostvar, M. Manuchehri

Abstract:

Preparation of regional input-output tables by statistical methods involves high costs and long timescales, while estimates obtained by non-statistical methods have low confidence. Therefore, recent input-output studies suggest integrated methods for this purpose. In this study, the GRIT method is first introduced as an appropriate integrated method for the preparation of an input-output table for Fars province. Next, the input-output table for Fars province is prepared using this method, based on the national input-output table of 2001. The necessary modifications were made for changes in the level of prices and for differences between regional trade and trade at the national level. Moreover, up-to-date statistics and information, together with the views of technical experts on the various economic sectors, were used alongside the input-output table to prepare the 2011 table, followed by an investigation of the general structure of the province's economy based on the value added obtained from this table.

Keywords: GRIT, input-output table, regional

Procedia PDF Downloads 253
16534 Empirical Study From Final Exams of Graduate Courses in Computer Science to Demystify the Notion of an Average Software Engineer and Offer a Direction to Address Diversity of Professional Backgrounds of a Student Body

Authors: Alex Elentukh

Abstract:

The paper is based on data collected from final exams administered during five years of teaching the graduate course in software engineering. The visualization instrument with four distinct personas has been used to improve the effectiveness of each class. The study offers a plethora of clues toward students' behavioral preferences. Diversity among students (professional background, physical proximity) is too significant to assume a single face of a learner. This is particularly true for a body of online graduate students in computer science. Conclusions of the study (each learner is unique, and each class is unique) are extrapolated to demystify the notion of an 'average software engineer.' An immediate direction for an educator is to ensure a course applies to a wide audience of very different individuals. On the other hand, a student should be clear about his/her abilities and preferences - to follow the most effective learning path.

Keywords: K.3.2 computer and information science education, learner profiling, adaptive learning, software engineering

Procedia PDF Downloads 95
16533 Articulating Competencies Confidently: Employability in the Curriculum

Authors: Chris Procter

Abstract:

There is a significant debate on the role of University education in developing or teaching employability skills. Should higher education attempt to do this? Is it the best place? Is it able to do so? Different views abound, but the question is wrongly posed – one of the reasons that previous employability initiatives foundered (e.g., in the UK). Our role is less to teach than to guide, less to develop and more to help articulate: “the mind is not a vessel to be filled, but a fire to be lit” (Plutarch). This paper then addresses how this can be achieved taking into account criticism of employability initiatives as well as relevant learning theory. It discusses the experience of a large module which involved students being assessed on all stages of application for a live job description together with reflection on their professional development. The assessment itself adopted a Patchwork Text approach as a vehicle for learning. Students were guided to evaluate their strengths and areas to be developed, articulate their competencies, and reflect upon their development, moving on to new Thresholds of Employability. The paper uses the student voices to express the progress they made. It concludes that employability can and should be an effective part of the higher education curriculum when designed to encourage students to confidently articulate their competencies and take charge of their own professional development.

Keywords: competencies, employability, patchwork assessment, threshold concepts

Procedia PDF Downloads 211
16532 Electrochemical Corrosion and Mechanical Properties of Structural Materials for Oil and Gas Applications in Simulated Deep-Sea Well Environments

Authors: Turin Datta, Kisor K. Sahu

Abstract:

Structural materials used in today’s oil and gas exploration and in the drilling of both onshore and offshore oil and gas wells must possess superior tensile properties and excellent resistance to corrosive degradation, including general corrosion, localized (pitting and crevice) corrosion, and environment-assisted cracking such as stress corrosion cracking and hydrogen embrittlement. High Pressure and High Temperature (HPHT) wells are typically operated at temperatures and pressures that can exceed 300-350 °F and 10,000 psi (69 MPa), respectively, which necessitates the use of exotic materials in these demanding sources of natural resources. This research investigation is focused on the evaluation of the tensile properties and corrosion behavior of AISI 4140 High-Strength Low-Alloy steel (HSLA), possessing a tempered martensitic microstructure, and Duplex 2205 Stainless Steel (DSS), having austenitic and ferritic phases. The selection of these two alloys is primarily based on economic considerations, as 4140 HSLA is cheaper than DSS 2205. Owing to the harsh, aggressive chemical species encountered in deep oil and gas wells, such as chloride ions (Cl-), carbon dioxide (CO2), and hydrogen sulphide (H2S), along with other mineral and organic acids, DSS 2205, having a dual-phase microstructure, can mitigate the degradation resulting from the simultaneous presence of chloride ions (Cl-) and hydrogen. Tensile property evaluation indicates a ductile failure of DSS 2205, whereas 4140 HSLA exhibits quasi-cleavage fracture due to the phenomenon of ‘tempered martensite embrittlement’. From the potentiodynamic polarization testing, it is observed that DSS 2205 has higher corrosion resistance than 4140 HSLA; the former exhibits passivity, signifying resistance to localized corrosion, while the latter exhibits active dissolution across the whole environmental parameter space that was tested. From the Scanning Electron Microscopy (SEM) evaluation, it is understood that stable pits appear in DSS 2205 only when the temperature exceeds the critical pitting temperature (CPT). SEM observation of the corroded 4140 HSLA specimen tested in aqueous 3.5 wt.% NaCl solution reveals intergranular cracking, which appears due to the adsorption and diffusion of hydrogen during polarization, thus causing hydrogen-induced cracking/hydrogen embrittlement. General corrosion testing of DSS 2205 in acidic brine (pH~3.0) at ambient temperature using coupons indicates no weight loss even after three months, whereas the corrosion rate of AISI 4140 HSLA is significantly higher after one month of testing.

Keywords: DSS 2205, polarization, pitting, SEM

Procedia PDF Downloads 263
16531 Exploratory Study of the Influencing Factors for Hotels' Competitors

Authors: Asma Ameur, Dhafer Malouche

Abstract:

Hotel competitiveness research is an essential phase of the marketing strategy for any hotel. Knowing a hotel's competitors helps the hotelier grasp its position in the market and helps customers make the right choice when picking a hotel. Competitiveness is thus an important indicator that can be influenced by various factors. In fact, competitiveness, the ability to cope with competition, remains a difficult and complex concept to define and to exploit. The purpose of this article is therefore to carry out an exploratory study to calculate a competitiveness indicator for hotels. Further on, this paper makes it possible to determine the criteria with a direct or indirect effect on the image and perception of a hotel. The present research looks for an appropriate model of hotel competitiveness. For this purpose, we exploit different theoretical contributions in the field of machine learning and use statistical techniques such as Principal Component Analysis (PCA) to reduce the dimensions, as well as other statistical modeling techniques. This paper presents a survey covering the techniques and methods used in hotel competitiveness research. Furthermore, this study allows us to deduce the significant variables that influence the determination of a hotel's competitors. Lastly, the experiments discussed in this article show that a hotel's competitors are influenced by several factors, with different weights.
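The PCA dimension-reduction step mentioned above can be sketched with a plain SVD-based projection (illustrative; the authors' exact pipeline is not specified at this level of detail):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the feature matrix X (rows = hotels, columns = criteria)
    onto its first k principal components via the SVD of the centered data."""
    Xc = X - X.mean(axis=0)           # center each criterion
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T              # scores on the top-k components
```

The reduced scores can then feed a statistical model or a composite competitiveness indicator, with the explained variance of each component suggesting how many dimensions to keep.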

Keywords: competitiveness, e-reputation, hotels' competitors, online hotel’ review, principal component analysis, statistical modeling

Procedia PDF Downloads 114
16530 The Influence of Covariance Hankel Matrix Dimension on Algorithms for VARMA Models

Authors: Celina Pestano-Gabino, Concepcion Gonzalez-Concepcion, M. Candelaria Gil-Fariña

Abstract:

Some estimation methods for VARMA models, and multivariate time series models in general, rely on the use of a Hankel matrix. It is known that if the data sample is large enough and the dimension of the Hankel matrix is unnecessarily large, this may result in an unnecessary number of computations as well as in numerical problems. In this sense, the aim of this paper is two-fold. First, we provide some theoretical results for these matrices which translate into a lower dimension for the matrices normally used in the algorithms. This contribution thus serves to improve those methods from a numerical and, presumably, statistical point of view. Second, we have chosen an estimation algorithm to illustrate our improvements in practice. The results we obtained in a simulation of VARMA models show that an increase in the size of the Hankel matrix beyond the proposed theoretical bound does not necessarily lead to improved practical results. Therefore, for future research, we propose conducting similar studies using any of the linear system estimation methods that depend on Hankel matrices.
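For concreteness, a block Hankel matrix of covariances, whose dimension is the quantity at issue here, can be assembled as follows (a generic sketch; the exact matrix used by a given estimation algorithm depends on that algorithm):

```python
import numpy as np

def block_hankel(covs, rows, cols):
    """Assemble the block Hankel matrix H whose (i, j) block is covs[i + j],
    from a sequence of k x k covariance matrices covs[0], covs[1], ...
    Choosing `rows` and `cols` larger than necessary inflates the number of
    computations without adding information, which is the paper's point."""
    return np.block([[covs[i + j] for j in range(cols)] for i in range(rows)])
```

The theoretical bounds discussed in the abstract effectively cap `rows` and `cols`, keeping the matrix as small as the model's Kronecker indices allow.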

Keywords: covariance Hankel matrices, Kronecker indices, system identification, VARMA models

Procedia PDF Downloads 240
16529 Corpora in Secondary Schools Training Courses for English as a Foreign Language Teachers

Authors: Francesca Perri

Abstract:

This paper describes a proposal for a teacher training course focused on the introduction of corpora into the EFL (English as a foreign language) didactics of some Italian secondary schools. The training course is conceived as part of a TEDD participant's five-month internship. TEDD (Technologies for Education: diversity and devices) is an advanced course held by the Department of Engineering and Information Technology at the University of Trento, Italy. Its main aim is to train a selected, heterogeneous group of graduates to engage with the complex interdependence between education and technology in modern society. The educational approach draws on a plural coexistence of various theories, such as socio-constructivism, constructionism, project-based learning, and connectivism. The TEDD educational model stands as the main reference for the design of a formative course for EFL teachers, drawing on the digitalization of didactics and the creation of interactive learning materials for intermediate L2 students. The training course lasts ten hours, organized into five sessions. In the first part (first and second sessions), a series of guided and semi-guided activities leads participants to familiarize themselves with corpora through the use of a digital tools kit. Then, during the second part, participants are specifically involved in the realization of a ML (Mistakes Laboratory), where they create, develop and share corpus-based digital activities according to their teaching goals, supported by the digital facilitator. The training course takes place in an ICT laboratory where the teachers work either individually or in pairs at networked computers, while the digital facilitator shares inputs, materials and digital assistance simultaneously on a whiteboard and on a digital platform where participants interact and work together both synchronously and asynchronously.
The adoption of good ICT practices is a fundamental step in promoting the introduction and use of corpus linguistics in EFL teaching and learning processes. Dealing with corpora not only promotes L2 learners' critical thinking and discernment, as opposed to indiscriminate browsing for ready-made translations or language usage samples, but also entails becoming confident with digital tools and activities. The paper will explain the rationale, limits and resources of the pedagogical approach adopted to engage EFL teachers with the use of corpora in their didactics through the promotion of digital practices.

Keywords: digital didactics, education, language learning, teacher training

Procedia PDF Downloads 147
16528 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector

Authors: Mariam Vardiashvili

Abstract:

The economic significance of the asset impairment process is considerable. Impairment reflects a reduction in the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services. Consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, have been designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For measuring impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. The value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating assets, and also deals with the factors which should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how these methods are selected. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting.
The article concludes that, regardless of the functional purpose of the impaired asset and the measurement method used, the presentation of realistic information regarding the value of the assets should be ensured in financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used. The research was carried out with a systemic approach, drawing on international accounting standards and the theoretical research and publications of Georgian and foreign scholars.
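The discounted value-in-use measurement for a cash-generating asset, and the impairment loss it implies, can be sketched as follows. The cash flows, horizon, and 8% discount rate are invented for illustration and are not taken from the article:

```python
# Value in use (IPSAS 26 style): expected future cash flows discounted to
# present value. An impairment loss is recognised when the carrying amount
# exceeds the recoverable amount (here proxied by value in use).
def value_in_use(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

carrying_amount = 10_000.0
viu = value_in_use([2_500, 2_500, 2_500, 2_500], rate=0.08)  # 4-year horizon
impairment_loss = max(carrying_amount - viu, 0.0)
print(round(viu, 2), round(impairment_loss, 2))
```

Under the expected cash flow approach discussed in the abstract, each year's figure would itself be a probability-weighted average of scenarios rather than a single estimate.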

Keywords: cash-generating assets, non-cash-generating assets, recoverable value, value in use

Procedia PDF Downloads 136
16527 Teacher’s Perception of Dalcroze Method Course as Teacher’s Enhancement Course: A Case Study in Hong Kong

Authors: Ka Lei Au

Abstract:

The Dalcroze method has been gaining ground in music classrooms, and music teachers are encouraged to integrate music and movement in their teaching. Music programs in colleges in Hong Kong have been introducing method courses, such as the Orff and Dalcroze methods, into music teacher education programs. Since the students targeted by such courses are music teachers who decide what approach to use in their own classrooms, their perceptions are particularly valuable in identifying how applicable the approach is to their teaching with respect to the local teaching and learning culture and environment. This qualitative study aims to explore how the Dalcroze method, as a teacher education course, is perceived by music teachers in three respects: 1) application in music teaching, 2) self-enhancement, and 3) expectations. Through the lens of music teachers, data were collected by survey from 30 music teachers taking the Dalcroze method course in music teaching in Hong Kong. The findings reveal the value of the Dalcroze method in Hong Kong and teachers' intentions to use it. They also provide a significant reference for the better development of such courses in the future, in adaptation to the culture, the teaching and learning environment, and teachers', students' and parents' perceptions of this approach.

Keywords: Dalcroze method, music teaching, perception, self-enhancement, teacher’s education

Procedia PDF Downloads 401
16526 Hydro-Meteorological Vulnerability and Planning in Urban Area: The Case of Yaoundé City in Cameroon

Authors: Ouabo Emmanuel Romaric, Amougou Armathe

Abstract:

Background and aim: The study of the impacts of floods and landslides at a small scale, specifically in the urban areas of developing countries, is carried out to provide tools to the actors involved in risk management in such areas, which are now being affected by climate change. The main objective of this study is to assess the hydrometeorological vulnerabilities associated with flooding and urban landslides and to propose adaptation measures. Methods: Climatic data were analyzed by calculating indices of climate change over the period 1960-2012. Field data were analyzed with SPSS 18 software to determine the causes, the level of risk, and its consequences in the study area. Cartographic analysis and GIS were used to refine the work in space. Spatial and terrain analyses were then carried out to determine the morphology of the terrain in relation to floods and landslides and their diffusion across the area. Results: The analysis of interannual changes in precipitation highlighted 21 surplus years, 24 deficit years and 7 normal years. The Barakat method brings out an evolution of precipitation marked by abrupt shifts and jumps. Floods and landslides are correlated with high precipitation during surplus and normal years. Field data show that the population is aware of the risks (78%), with 74% exposed, but its capacity for adaptation is low (51%). Floods are the main risk. The soils are classified as ferralitic (80%), hydromorphic (15%) and raw mineral (5%). The slope variation (5% to 15%) of small hills and deep valleys, combined with unregulated construction, favors floods and landslides during heavy precipitation. Waste mismanagement blocks the free flow of rivers and accentuates floods. Conclusion: The vulnerability of the population to hydrometeorological risks in Yaoundé VI results from the combination of changes in parameters such as precipitation and temperature due to climate change and the poor planning of construction in urban areas.
Because saturated soils leave water few channels in which to circulate, and given the increase in heavy precipitation and the mismanagement of waste, the result is floods and landslides that cause extensive damage to goods and people.
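The classification of years into surplus, deficit, and normal categories can be sketched by comparing annual precipitation totals to the long-term mean; the synthetic rainfall series and the tolerance band of ±0.25 standard deviations are illustrative assumptions, not the study's actual criterion:

```python
import numpy as np

# Synthetic annual precipitation totals (mm) for 1960-2012 stand in for the
# observed Yaounde series; real data would replace this placeholder.
rng = np.random.default_rng(2)
years = np.arange(1960, 2013)
rainfall = rng.normal(loc=1600, scale=200, size=years.size)

mean, std = rainfall.mean(), rainfall.std()
labels = np.where(rainfall > mean + 0.25 * std, "surplus",
         np.where(rainfall < mean - 0.25 * std, "deficit", "normal"))

for cls in ("surplus", "deficit", "normal"):
    print(cls, int((labels == cls).sum()))
```

With the study's actual index and thresholds, the same loop would reproduce the reported 21/24/7 split of surplus, deficit and normal years.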

Keywords: climate change, floods, hydrometeorological, vulnerability

Procedia PDF Downloads 462
16525 Developing an Accurate AI Algorithm for Histopathologic Cancer Detection

Authors: Leah Ning

Abstract:

This paper discusses the development of a machine learning algorithm that accurately detects metastatic breast cancer (cancer that has spread from its site of origin) in images taken from pathology scans of lymph node sections. Developing an accurate artificial intelligence (AI) algorithm would help significantly in breast cancer diagnosis, since manual examination of lymph node scans is both tedious and oftentimes highly subjective. Using AI in the diagnosis process provides a much more straightforward, reliable, and efficient method for medical professionals and would enable faster diagnosis and, therefore, more immediate treatment. The overall approach was to train a convolutional neural network (CNN) on a set of pathology scan data and use the trained model to classify new scans as benign or malignant, outputting a 0 or a 1, respectively. The final model's prediction accuracy is high: 100% on the training set and over 70% on the test set. Achieving such accuracy with an AI model is significant for medical pathology and cancer detection. Having AI as a tool capable of quick detection will substantially help medical professionals and patients suffering from cancer.
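The building block of such a CNN, a convolution followed by ReLU, pooling, and a sigmoid readout producing a benign/malignant probability, can be sketched in plain numpy. The patch size, the single untrained random filter, and the global-mean readout are drastic simplifications of the paper's actual trained network:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(3)
patch = rng.random((32, 32))        # stand-in for a lymph-node scan patch
kernel = rng.normal(size=(3, 3))    # one filter; weights untrained here

feat = np.maximum(conv2d(patch, kernel), 0.0)          # ReLU activation
pooled = feat.reshape(15, 2, 15, 2).max(axis=(1, 3))   # 2x2 max pooling
logit = pooled.mean()                                  # crude global readout
prob = 1.0 / (1.0 + np.exp(-logit))                    # sigmoid: P(malignant)
label = int(prob >= 0.5)                               # 1 = malignant, 0 = benign
```

A production model would stack many such filter layers, learn the weights from labelled scans, and replace the mean readout with trained fully connected layers.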

Keywords: breast cancer detection, AI, machine learning, algorithm

Procedia PDF Downloads 87
16524 Physical Interaction Mappings: Utilizing Cognitive Load Theory in Order to Enhance Physical Product Interaction

Authors: Bryan Young, Andrew Wodehouse, Marion Sheridan

Abstract:

The availability of working memory has long been identified as a critical aspect of instructional design. Many conventional instructional procedures impose irrelevant or unrelated cognitive loads on the learner because they were created without consideration, or understanding, of cognitive load. Learning to physically operate traditional products can be viewed as a learning process akin to any other. As such, many of today's products, such as cars, boats, and planes, whose traditional controls predate modern user-centered design techniques, may be imposing irrelevant or unrelated cognitive loads on their operators. The goal of the research was to investigate the fundamental relationships between physical inputs, resulting actions, and learnability. The results showed that individuals can quickly adapt to input/output reversals across dimensions; however, they struggle to cope when the input/output dimensions are rotated, owing to the resulting increase in cognitive load.

Keywords: cognitive load theory, instructional design, physical product interactions, usability design

Procedia PDF Downloads 532
16523 Half Dose Tissue Plasminogen Activator for Intermediate-Risk Pulmonary Embolism

Authors: Macie Matta, Ahmad Jabri, Stephanie Jackson

Abstract:

Introduction: In the absence of hypotension, pulmonary embolism (PE) causing right ventricular dysfunction or strain, whether confirmed by imaging or cardiac biomarkers, is classified as intermediate-risk. Urgent treatment of intermediate-risk PE can prevent progression to hemodynamic instability and death. Management options include thrombolysis, thrombectomy, and systemic anticoagulation. We aim to evaluate the short-term outcomes of half-dose tissue plasminogen activator (tPA) for the management of intermediate-risk PE. Methods: We retrospectively identified adult patients diagnosed with intermediate-risk PE between 2000 and 2021. Demographic data, lab values, imaging, treatment choice, and outcomes were obtained through chart review. Primary outcomes were major bleeding events and in-hospital mortality. Patients on standard systemic anticoagulation who received neither thrombolysis nor thrombectomy served as controls. Patient data were analyzed using SAS software (version 9.4; Cary, NC) to compare individuals who received half-dose tPA with controls, with statistical significance set at a p-value of 0.05. Results: We included 57 patients in our final analysis, with 19 receiving tPA. Patient characteristics and comorbidities were comparable between the groups. There were significant differences between the groups in PE location, presence of acute deep vein thrombosis, and peak troponin level. The thrombolytic cohort was more likely than controls to demonstrate a 60/60 sign and a thrombus-in-transit finding on echocardiography. The thrombolytic group had higher rates of major bleeding (17% vs 7.9%, p = 0.4) and in-hospital mortality (5.3% vs 0%, p = 0.3); however, these differences were not statistically significant.
Larger-scale randomized controlled trials are needed to establish the benefit and safety of thrombolytics in patients with intermediate-risk PE.

Keywords: pulmonary embolism, half dose thrombolysis, tissue plasminogen activator, cardiac biomarkers, echocardiographic findings, major bleeding event

Procedia PDF Downloads 73
16522 Free Fibular Flaps in Management of Sternal Dehiscence

Authors: H. N. Alyaseen, S. E. Alalawi, T. Cordoba, É. Delisle, C. Cordoba, A. Odobescu

Abstract:

Sternal dehiscence is defined as the persistent separation of the sternal bones, often complicated by mediastinitis. The etiologies of sternal dehiscence vary, with cardiovascular and thoracic surgeries being the most common. Early diagnosis in susceptible patients is crucial to the management of such cases, as they are associated with high mortality rates. A recent meta-analysis of more than four hundred thousand patients concluded that deep sternal wound infections were the leading cause of mortality and morbidity in patients undergoing cardiac procedures. Long-term complications associated with sternal dehiscence include increased hospitalizations, myocardial infarctions, and renal and respiratory failure. Numerous osteosynthesis methods have been described in the literature. Surgical materials offer enough rigidity to support the sternum while remaining flexible enough to allow the physiological breathing movements of the chest; however, these materials fall short in patients with extensive bone loss, osteopenia, or generally poor bone quality. For such cases, flaps offer a better closure system. Early utilization of flaps yields better survival rates compared to delayed closure or to sternal rewiring with closed drainage. The use of pectoralis major, rectus abdominis, and latissimus muscle flaps has been described in the literature as a set of strong alternatives. Flap selection depends on a variety of factors, mainly the size of the sternal defect, infection, and the availability of local tissues. Free fibular flaps are commonly harvested for reconstruction throughout the body. In cases of sternal reconstruction with free fibular flaps, the literature has exclusively discussed the flap applied vertically to the chest wall. We present a different technique, applying a free fibular triple-barrel flap oriented transversely, parallel to the ribs.
In our experience, this method could enhance results and improve prognosis, as it contributes to the normal circumferential shape of the chest wall.

Keywords: sternal dehiscence, management, free fibular flaps, novel surgical techniques

Procedia PDF Downloads 90
16521 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective

Authors: Hoa Pham

Abstract:

The literature on language teaching and second language acquisition has been largely driven by a monolingual ideology, with the common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary-level teachers' code switching practices in Vietnam. The findings contribute to our understanding of code switching practices in language classrooms from the learners' perspective. Data were collected through focus group interviews with students working towards a Bachelor degree in English within the English for Business Communication stream. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, the group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others, as they do in real life. The process of interaction provides the opportunity to obtain meanings and answers to a problem that are "socially constructed rather than individually created", leading to the capture of real-life data. This distinct feature of group interaction makes the technique a powerful means of obtaining deeper and richer data than individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of this practice, indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source of support for language output. This practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge.
This practice was also believed to scaffold the students' language production in different contexts. However, the students indicated a preference for teacher code switching to be constrained, as extensive use was believed to negatively impact their L2 learning and to trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively affected. This study found that teacher code switching was supported by learners in certain contexts, suggesting that the widespread assumption favouring a monolingual teaching approach needs to be reconsidered.

Keywords: code switching, L1 use, L2 teaching, learners’ perception

Procedia PDF Downloads 311
16520 GRCNN: Graph Recognition Convolutional Neural Network for Synthesizing Programs from Flow Charts

Authors: Lin Cheng, Zijiang Yang

Abstract:

Program synthesis is the task of automatically generating programs from a user specification. In this paper, we present a framework that synthesizes programs from flow charts, which serve as accurate and intuitive specifications. To do so, we propose a deep neural network, called GRCNN, that recognizes the structure of a graph from its image. GRCNN is trained end-to-end and predicts the edge and node information of the flow chart simultaneously. Experiments show that the accuracy rate for synthesizing a program is 66.4%, and the accuracy rates for recognizing edges and nodes are 94.1% and 67.9%, respectively. On average, it takes about 60 milliseconds to synthesize a program.
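Once the nodes and edges have been recognized from the image, the remaining synthesis step amounts to a walk over the recovered graph that emits code. The node/edge encoding below is a hypothetical simplification for illustration, not the representation GRCNN actually outputs, and it handles only a toy graph with one decision:

```python
# Synthesize source code from a recognized flow-chart graph.
# nodes: id -> (kind, text); edges: (src, dst, branch_label) triples.
def synthesize(nodes, edges, start):
    succ = {}
    for src, dst, label in edges:
        succ.setdefault(src, {})[label] = dst
    lines, node = [], start
    while node is not None:
        kind, text = nodes[node]
        if kind == "statement":
            lines.append(text)
            node = succ.get(node, {}).get("")      # follow the unlabeled edge
        elif kind == "decision":
            lines.append(f"if {text}:")
            lines.append("    " + nodes[succ[node]["yes"]][1])
            lines.append("else:")
            lines.append("    " + nodes[succ[node]["no"]][1])
            node = None  # branches terminate in this toy graph
        else:
            node = None
    return "\n".join(lines)

nodes = {
    0: ("statement", "x = int(input())"),
    1: ("decision", "x % 2 == 0"),
    2: ("statement", "print('even')"),
    3: ("statement", "print('odd')"),
}
edges = [(0, 1, ""), (1, 2, "yes"), (1, 3, "no")]
print(synthesize(nodes, edges, start=0))
```

The difficulty the paper quantifies lies upstream of this step: the 66.4% end-to-end rate reflects how often the node and edge recognition is accurate enough for such a walk to yield the intended program.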

Keywords: program synthesis, flow chart, specification, graph recognition, CNN

Procedia PDF Downloads 116
16519 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option for tackling the prevailing challenges, supporting test engineers with fast, parallel, and repeatable test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which in most cases conflicts with the time constraints imposed on quality assurance for complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning and analyzes test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, the approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach requires only historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small real-world systems.
The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state of the art. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
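Subset Accuracy, the strictest of the multi-label criteria mentioned above, counts a prediction as correct only when the full predicted label set matches the true set exactly. A minimal sketch, with made-up test-automation component names:

```python
# Subset Accuracy for multi-label predictions: a sample counts as correct
# only if the predicted label set equals the true label set exactly.
def subset_accuracy(y_true, y_pred):
    exact = sum(1 for t, p in zip(y_true, y_pred) if set(t) == set(p))
    return exact / len(y_true)

# Hypothetical component labels per test step (illustrative names only)
y_true = [{"OpenDoor"}, {"SetSpeed", "CheckLamp"}, {"Reset"}]
y_pred = [{"OpenDoor"}, {"SetSpeed"}, {"Reset"}]

print(subset_accuracy(y_true, y_pred))  # 2 of 3 sets match exactly
```

This strictness is why the reported accuracy drops from 86% in the 1:1 multi-class setting to 60% in the multi-label setting: a prediction missing even one of several required components scores zero for that sample.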

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 128