Search results for: academic speed and accuracy

6133 Wear Measurement of Thermomechanical Parameters of the Metal Carbide

Authors: Riad Harouz, Brahim Mahfoud

Abstract:

The threaded and round bars used in reinforced concrete are obtained by hot rolling with metal-carbide finishing rolls, each of which carries a rolling groove (throat) around its outside diameter. Our observation is that this throat shows geometrical wear by the end of its service cycle, which is specified in tonnage. In a first step, we measured this wear experimentally as a function of the thermo-mechanical parameters (speed, load, and temperature) and determined the influence of these parameters on the wear. In a second step, we developed a mathematical lifetime model useful for predicting the wear and its evolution.

Keywords: lifetime, metal carbides, modeling, thermo-mechanical, wear

Procedia PDF Downloads 314
6132 A Proposal to Mobile Payment Implementing 2AF+

Authors: Nael Hirzallah, Sana Nseir

Abstract:

Merchants are competing to offer mobile payment in order to encourage shopping. Many mobile payment systems have been made available in various locations worldwide; however, they have various drawbacks. This paper discusses the main drawbacks of these systems, namely security and speed of transaction, and proposes a new mobile payment system that addresses them. The proposal is simple for customers and merchants to use. Furthermore, the proposed system relies on a new authentication factor, introduced in this paper and called Two-Factor Authentication Plus (2FA+).

Keywords: electronic commerce, payment schemes, mobile payment, authentication factors, mobile applications

Procedia PDF Downloads 291
6131 Static Test Pad for Solid Rocket Motors

Authors: Svanik Garg

Abstract:

Static test pads (STPs) are stationary mechanisms that hold a solid rocket motor and measure the different parameters of its operation, including thrust and temperature, to better calibrate it for launch. This paper outlines a specific STP designed to test high-powered rocket motors with a thrust upwards of 4000 N and limited to 6500 N. The design is a portable mechanism, with cost an integral part of the design process, to make it accessible to small-scale rocket developers with limited resources. Using curved surfaces and an ergonomic design, the STP has a carefully engineered case with a focus on stability and axial calibration of thrust. This paper describes the design, operation and working of the STP and its wide-scale uses given the growing market of aviation enthusiasts. Simulations on the CAD model in Fusion 360 provided promising results, establishing a safety factor of 2 and bounding the stress along with the load coefficient. A PCB was also designed as part of the test pad design process to help obtain results, with visual output and various virtual terminals to collect data on different parameters. The circuitry was simulated using Proteus, and a special virtual interface with auditory commands was also created for accessibility and wide-scale implementation. Along with this description of the design, the paper also emphasizes the design principles behind the STP, including its vertical orientation to maximize thrust accuracy and a stable base to prevent micromovements. Given the rise of students and professionals alike building high-powered rockets, the STP described in this paper is an appropriate option, offering limited cost, portability, accuracy, and versatility. There are two types of STPs, vertical and horizontal; the one discussed in this paper is vertical, to utilize the axial component of thrust.

Keywords: static test pad, rocket motor, thrust, load, circuit, avionics, drag

Procedia PDF Downloads 392
6130 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine

Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko

Abstract:

This paper discusses the CFD results of heat transfer in the water channels of the engine body. The research engine was a newly designed diesel combustion engine. The engine has three cylinders with three pairs of opposed pistons inside. The engine will be able to generate 100 kW of mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels run in the engine body along the axes of the three cylinders and surround the three combustion chambers. They transfer the combustion heat generated in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels, which should carry a maximum flow of heat from the combustion chamber to the external radiator. Based on parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with a standard wall function was applied. The turbulence intensity factor was 10%. The working fluid mass flow rate was set to a single typical value, specified in line with research into the flow rates of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the channel cross-section along the axis of the cylinder. The results are presented as colour maps of temperature distribution, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
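As a rough, back-of-envelope illustration of the cooling-duty reasoning behind such an analysis (a sketch only; the heat-rejection figure, temperature rise and channel dimensions below are assumptions, not values from the paper):

```python
# Illustrative check of a cooling-channel design point.
# All numbers are assumed for illustration; they are not taken from the paper.

def required_mass_flow(q_reject_w: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """Coolant mass flow needed to carry away q_reject_w with a temperature rise delta_t_k."""
    return q_reject_w / (cp_j_per_kg_k * delta_t_k)

def reynolds_number(m_dot_kg_s: float, d_h_m: float, area_m2: float, mu_pa_s: float) -> float:
    """Reynolds number for flow in a channel of hydraulic diameter d_h_m and cross-section area_m2."""
    velocity = m_dot_kg_s / (1000.0 * area_m2)      # water density ~1000 kg/m^3
    return 1000.0 * velocity * d_h_m / mu_pa_s

if __name__ == "__main__":
    # Assume roughly 50 kW must be rejected to the coolant (illustrative figure only).
    m_dot = required_mass_flow(q_reject_w=50_000.0, cp_j_per_kg_k=4180.0, delta_t_k=10.0)
    re = reynolds_number(m_dot, d_h_m=0.01, area_m2=1e-4, mu_pa_s=1e-3)
    print(f"mass flow ~ {m_dot:.2f} kg/s, Re ~ {re:.0f}")  # Re >> 4000 motivates a k-epsilon model
```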

Keywords: ANSYS Fluent, combustion engine, computational fluid dynamics (CFD), cooling system

Procedia PDF Downloads 224
6129 The Role of Critical Thinking in Disease Diagnosis: A Comprehensive Review

Authors: Mohammad Al-Mousawi

Abstract:

This academic article explores the indispensable role of critical thinking in the process of diagnosing diseases. Employing a multidisciplinary approach, we delve into the cognitive skills and analytical mindset that clinicians, researchers, and healthcare professionals must employ to navigate the complexities of disease identification. By examining the integration of critical thinking within the realms of medical education, diagnostic decision-making, and technological advancements, this article aims to underscore the significance of cultivating and applying critical thinking skills in the ever-evolving landscape of healthcare.

Keywords: critical thinking, medical education, diagnostic decision-making, fostering critical thinking

Procedia PDF Downloads 78
6128 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, in conflict with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
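A minimal sketch of the underlying idea, assuming a generic scikit-learn pipeline rather than the authors' actual implementation: test-step texts are the inputs, test-automation components are the labels, and ‘Subset Accuracy’ counts a prediction as correct only when the full label set matches.

```python
# Illustrative multi-label setup: the tiny dataset, label names and model choice
# are assumptions for demonstration, not the authors' pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score

steps = [
    "open ignition and check status lamp",
    "send CAN message and verify response",
    "open ignition then send CAN message",
]
components = [["PowerControl"], ["CanBus"], ["PowerControl", "CanBus"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)
X = TfidfVectorizer().fit_transform(steps)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
Y_pred = clf.predict(X)

# 'Subset Accuracy': a prediction counts only if the full label set matches exactly.
print("subset accuracy:", accuracy_score(Y, Y_pred))
```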

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 136
6127 Application of Argumentation for Improving the Classification Accuracy in Inductive Concept Formation

Authors: Vadim Vagin, Marina Fomina, Oleg Morosin

Abstract:

This paper describes an argumentation approach to the problem of inductive concept formation. It is proposed to use argumentation, based on defeasible reasoning with justification degrees, to improve the quality of classification models obtained by generalization algorithms. Experimental results on both clean and noisy data are also presented.

Keywords: argumentation, justification degrees, inductive concept formation, noise, generalization

Procedia PDF Downloads 449
6126 Vibro-Tactile Equalizer for Musical Energy-Valence Categorization

Authors: Dhanya Nair, Nicholas Mirchandani

Abstract:

Musical haptic systems can enhance a listener’s musical experience while providing an alternative platform for the hearing impaired to experience music. Current music tactile technologies focus on representing tactile metronomes to synchronize performers or on encoding musical notes into distinguishable (albeit distracting) tactile patterns. There is growing interest in the development of musical haptic systems to augment the auditory experience, although the haptic-music relationship is still not well understood. This paper presents a tactile music interface that provides vibrations to multiple fingertips in synchronicity with auditory music. Like an audio equalizer, different frequency bands are filtered out, and the power in each frequency band is computed and converted to a corresponding vibrational strength. These vibrations are felt on different fingertips, each corresponding to a different frequency band. Songs from different parts of the musical spectrum, as classified by their energy and valence, were used to test the effectiveness of the system and to understand the relationship between music and tactile sensations. Three participants were trained on one song categorized as sad (low energy and low valence score) and one song categorized as happy (high energy and high valence score). They were trained both with and without auditory feedback (listening to the song while experiencing the tactile music on their fingertips and then experiencing the vibrations alone without the music). The participants were then tested on three songs from both categories, without any auditory feedback, and were asked to classify the tactile vibrations they felt into either category. The participants were blinded to the songs being tested and were not provided any feedback on the accuracy of their classification. These participants were able to classify the music with 100% accuracy. Although the songs tested were on two opposite ends of the spectrum (sad/happy), the preliminary results show the potential of utilizing a vibrotactile equalizer, like the one presented, for augmenting the musical experience while furthering the current understanding of the music-tactile relationship.
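A minimal sketch of the equalizer-style mapping described above, assuming illustrative band edges and a synthetic signal rather than the study's actual songs and hardware:

```python
# Band edges, sample rate and the synthetic test signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100                                     # sample rate (Hz)
BANDS = [(20, 250), (250, 2000), (2000, 8000)]  # one band per fingertip

def band_strengths(audio: np.ndarray) -> list[float]:
    strengths = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, audio)
        strengths.append(float(np.sqrt(np.mean(band ** 2))))   # RMS power in the band
    peak = max(strengths) or 1.0
    return [s / peak for s in strengths]        # normalise to 0..1 vibration amplitude

t = np.arange(FS) / FS
demo = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)
print(band_strengths(demo))                     # strongest vibration on the low-frequency finger
```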

Keywords: haptic music relationship, tactile equalizer, tactile music, vibrations and mood

Procedia PDF Downloads 184
6125 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters, i.e., the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit the evaluation of a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes, i.e., to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The fitness function “Material Removal Rate” (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
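Two of the deterministic fitness functions can be sketched as follows, assuming textbook forms of MRR and the Kienzle force model with illustrative coefficients (not the values calibrated for the Tornos DT13):

```python
# Simplified forms of two energy-related fitness functions. The Kienzle
# coefficients and unit handling are illustrative assumptions.
import math

def mrr_fitness(depth_mm: float, feed_mm_rev: float, spindle_rpm: float, diameter_mm: float) -> float:
    """Material Removal Rate (mm^3/min) as a proxy for energy consumption."""
    cutting_speed = math.pi * diameter_mm * spindle_rpm / 1000.0     # m/min
    return depth_mm * feed_mm_rev * cutting_speed * 1000.0           # mm^3/min

def kienzle_power(depth_mm: float, feed_mm_rev: float, spindle_rpm: float,
                  diameter_mm: float, kc11: float = 2100.0, mc: float = 0.25) -> float:
    """Cutting power (W) from the Kienzle force model F_c = kc11 * b * h^(1-mc)."""
    cutting_speed = math.pi * diameter_mm * spindle_rpm / 60_000.0   # m/s
    force_n = kc11 * depth_mm * feed_mm_rev ** (1.0 - mc)            # b ~ depth of cut, h ~ feed
    return force_n * cutting_speed

print(mrr_fitness(0.5, 0.05, 8000, 10.0))    # ~6283 mm^3/min
print(kienzle_power(0.5, 0.05, 8000, 10.0))  # ~465 W
```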

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 151
6124 Exploring the Use of Universal Design for Learning to Support The Deaf Learners in Lesotho Secondary Schools: English Teachers Voice

Authors: Ntloyalefu Justinah, Fumane Khanare

Abstract:

English learning has been found to be one of the prevalent areas of difficulty for Deaf learners. Studies indicate that this challenge is an upsetting concern globally and is attributed to various reasons, such as the way English is taught at schools and a lack of teachers' skills and knowledge, and that it therefore impacts negatively on learners' academic performance. Despite any difficulty in learning it, English is nowadays considered the key tool to an educational and occupational career, especially in Lesotho. This paper therefore intends to contribute to the existing literature by providing the views of Lesotho English teachers on how effectively Universal Design for Learning (UDL) can be implemented to enhance the academic performance of Deaf learners in the context of the English language classroom. The study sought to explore the use of UDL to support Deaf learners in Lesotho secondary schools. The present study is informed by the interpretive paradigm and situated within a qualitative research approach. Ten participating English teachers from two inclusive schools were purposefully selected and telephonically interviewed to generate data for this study. The data were thematically analysed. The findings indicated that even though UDL is identified as highly proficient and promotes flexibility in teaching methods, teachers reflect limited knowledge of the UDL approach. The findings further showed that UDL ensures education for all learners, including marginalised groups such as learners with disabilities, through different teaching strategies. This means that the findings signify the effective use of UDL for better performance in the English language by Deaf learners (DLs). This aligns with literature showing that mobilizing English teachers as assets helps DLs to be engaged and have control in their communities by defining and solving problems using their resources and connections to other networks for asset exchange. The study therefore concludes that teachers acknowledge that, even though they assume to be knowledgeable about the definition of UDL, they have limited practice of the approach; thus, they need to be equipped with techniques and skills for supporting the performance of DLs through the UDL approach in their English teaching. The researchers recommend that the Ministry of Education and Training, teacher training universities, and teacher training colleges raise awareness of UDL principles and include them in their curricula so that teachers can be properly trained on how to apply the approach effectively in their teaching.

Keywords: deaf learners, Lesotho, support learning, universal design for learning

Procedia PDF Downloads 118
6123 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency data (monthly indicators) using a mixed-frequency factor model that exploits both quarterly and monthly data. Nowcasting quarterly GDP in Indonesia is particularly relevant for the central bank of Indonesia, which sets the policy rate in the monthly Board of Governors Meeting, whereby one of the important steps is the assessment of the current state of the economy. Thus, having an accurate and up-to-date quarterly GDP nowcast every time new monthly information becomes available would clearly be of interest for the central bank of Indonesia, for example, as the initial assessment of the current state of the economy (including the nowcast) will be used as input for longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates; thus the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare the nowcasts with two other approaches: an autoregressive model, which is often a hard benchmark to beat when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. In particular, both the mixed-frequency factor model and MIDAS nowcasts are produced by exploiting the same set of monthly indicators; hence, we compare the nowcast performance of the two approaches directly. To preview the results, we find that by exploiting monthly indicators using the mixed-frequency factor model and MIDAS regression we improve the nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly-frequency data. However, it is not clear whether the MIDAS or the mixed-frequency factor model is better. Neither set of nowcasts encompasses the other, suggesting that both nowcasts are valuable in nowcasting GDP but neither is sufficient. By combining the two individual nowcasts, we find that the nowcast combination not only increases the accuracy, relative to the individual nowcasts, but also lowers the risk of the worst performance of the individual nowcasts.
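The nowcast-combination step can be illustrated with an equal-weight average, using made-up numbers rather than Indonesian GDP data:

```python
# Equal-weight combination of the factor-model and MIDAS nowcasts, compared by RMSE.
# The numbers below are illustrative, not actual GDP data.
import numpy as np

def rmse(pred: np.ndarray, actual: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

actual = np.array([5.1, 5.0, 4.9, 5.2])          # y/y GDP growth, %
factor_nowcast = np.array([5.3, 4.8, 4.7, 5.4])
midas_nowcast  = np.array([4.9, 5.2, 5.1, 5.0])
combined = 0.5 * factor_nowcast + 0.5 * midas_nowcast

for name, series in [("factor", factor_nowcast), ("MIDAS", midas_nowcast), ("combined", combined)]:
    print(name, rmse(series, actual))            # the combination has the lowest RMSE here
```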

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 334
6122 Mg AZ31B Alloy Processed through ECASD

Authors: P. Fernández-Morales, D. Peláez, C. Isaza, J. M. Meza, E. Mendoza

Abstract:

Mg AZ31B alloy sheets were processed through the equal-channel angular sheet drawing (ECASD) process, following routes A and C at room temperature and varying the processing speed. SEM was used to analyze the microstructure; the grain size was refined and the presence of twins was observed. Vickers microhardness and tensile testing were carried out to evaluate the mechanical properties, showing, in general, a remarkable increase after the first pass and slight increases during subsequent passes, and that route C produces a more uniform distribution of properties through the thickness of the samples.

Keywords: ECASD, Mg Alloy, mechanical properties, microstructure

Procedia PDF Downloads 368
6121 Telemedicine App Powered by AI

Authors: Cotran Mabeya

Abstract:

This paper focuses on an artificially intelligent telemedicine application that aims to enrich access to health care services, especially for those who live in remote and underserved areas. The app integrates advanced AI technologies, such as symptom checkers and virtual consultations, as well as health data integration, for efficient and user-friendly remote health support, with main features including AI-based diagnostics, real-time health monitoring through wearables, and an intuitive interface. The telemedicine application aims to address several healthcare problems, such as limited access in remote areas, high costs, lengthy wait times for certain services, and difficulty in getting second opinions. By making remote consultation easier, the application removes geographic and financial barriers to accessing affordable and timely medical care. In addition, by centralizing patient records and communication between healthcare providers, it allows continuity of care and makes transitions in treatment easier. The study used a mixed-methods approach, incorporating both quantitative and qualitative designs, to evaluate the socio-economic impacts of artificial intelligence and telemedicine on patients in Nairobi County. Adults made up the target population, while informants and respondents were categorized into patients, healthcare providers, and specialists in law, IT, and AI. Stratified and simple random sampling techniques were used to ensure diverse and inclusive representation and to enhance accuracy and triangulation in the data collected. Moreover, the study provides several recommendations, which include regularly updating AI symptom checkers to maintain accuracy, improving data security through encryption and multi-factor authentication, and integrating real-time health data from bodily wearables for personal healthcare.

Keywords: artificial intelligence, virtual consultations, user-friendly, remote areas

Procedia PDF Downloads 13
6120 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom CIRS 062QA, and a QA phantom obtained with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing in defined periods of time. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, the complete CT to ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT to ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%. Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise the machine requires service. The result of the image noise test must fall within the limit of 20% difference from the base value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal transfer of the loaded table must not differ by more than 2 mm vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme, with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the quality of the CT images used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 535
6119 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to realize simultaneous localization and environment map construction, which has a wide range of applications in autonomous driving, virtual reality and other related fields. At present, related research on VSLAM can maintain high accuracy in static environments. In dynamic environments, however, the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with this problem in dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. Firstly, a semantic segmentation neural network is used to extract the prior active motion region, prior static region and prior passive motion region in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into static and dynamic regions; thus, the dynamic object regions are successfully eliminated. Finally, only the static region is used by the tracking thread. Our research is based on the ORB-SLAM3 system, which is one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70% to 98.5% in highly dynamic environments.
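A minimal sketch of the dynamic-region removal idea, assuming OpenCV ORB features and a placeholder segmentation mask instead of the paper's semantic network:

```python
# Keypoints that fall inside a (semantic-segmentation) mask of moving objects are
# discarded before tracking. The random image and rectangular "moving object" mask
# are placeholders for real camera frames and a real segmentation network.
import cv2
import numpy as np

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # stand-in grayscale frame
dynamic_mask = np.zeros_like(frame)
dynamic_mask[100:300, 200:400] = 255                             # assumed moving-object region

orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(frame, None)

static_kps = [kp for kp in keypoints
              if dynamic_mask[int(kp.pt[1]), int(kp.pt[0])] == 0]  # keep static-region points only
print(f"kept {len(static_kps)} of {len(keypoints)} keypoints for tracking")
```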

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 121
6118 An Analysis of the Relation between Need for Psychological Help and Psychological Symptoms

Authors: İsmail Ay

Abstract:

In this study, we aimed to determine the relations between the need for psychological help and psychological symptoms. The sample of the study consists of 530 university students studying at Atatürk University in the 2015-2016 academic year. The Need for Psychological Help Scale and the Brief Symptom Inventory were used to collect data. In the data analysis, correlation analysis and a structural equation model with latent variables were used. Normality and homogeneity analyses were used to check the basic assumptions of parametric tests. The findings show that as psychological symptoms increase, the need for psychological help also increases. The findings were discussed in relation to the literature.

Keywords: psychological symptoms, need for psychological help, structural equation model, correlation

Procedia PDF Downloads 374
6117 Design and Simulation Interface Circuit for Piezoresistive Accelerometers with Offset Cancellation Ability

Authors: Mohsen Bagheri, Ahmad Afifi

Abstract:

This paper presents a new method for reading out piezoresistive accelerometer sensors. The circuit is based on an instrumentation amplifier and is useful for reducing the offset of the Wheatstone bridge. The obtained gain is 645, with 1 μV/°C equivalent drift and 1.58 mW power consumption. A Schmitt trigger and a multiplexer circuit control the output node. A high-speed counter is also designed in this work. The proposed circuit is designed and simulated in 0.18 μm CMOS technology with a 1.8 V power supply.
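The read-out chain rests on two standard textbook relations, sketched below with illustrative component values (not those of the proposed circuit):

```python
# Standard relations for a full Wheatstone bridge and a classic three-op-amp
# instrumentation amplifier. Component values here are illustrative assumptions.

def full_bridge_output(v_exc: float, delta_r_over_r: float) -> float:
    """Differential output of a full Wheatstone bridge with four active arms."""
    return v_exc * delta_r_over_r

def in_amp_gain(r_feedback: float, r_gain: float) -> float:
    """Gain of a three-op-amp instrumentation amplifier: G = 1 + 2*Rf/Rg."""
    return 1.0 + 2.0 * r_feedback / r_gain

v_out = full_bridge_output(v_exc=1.8, delta_r_over_r=1e-3)   # 1.8 V supply, 0.1% bridge imbalance
gain = in_amp_gain(r_feedback=32_200.0, r_gain=100.0)        # = 645, matching the reported gain
print(f"bridge output ~ {v_out*1e3:.2f} mV, amplified ~ {v_out*gain*1e3:.1f} mV, gain = {gain:.0f}")
```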

Keywords: piezoresistive accelerometer, zero offset, Schmitt trigger, bidirectional reversible counter

Procedia PDF Downloads 318
6116 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing

Authors: Amal Sellami, Ahlem Ammar

Abstract:

Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Consequently, collaboration has been implemented during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process can be demanding in terms of time in comparison to individual writing tasks. Consequently, because of time constraints, teachers may avoid it. For this reason, it might be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods are deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. Participants of the study include 4 pairs in each group (n = 8). They participated in two experimental conditions, which are: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing. The comparative findings indicate that while collaborative planning resulted in better overall text quality (more precisely, better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactical and mechanical errors. The discussion of the findings suggests the need to conduct more comparative research in order to further explore the effects of collaboration in planning or in reviewing. Pedagogical implications of the current study include advising teachers to choose between implementing collaboration in planning or in reviewing depending on their students' needs and what they need to improve.

Keywords: collaboration, writing, collaborative planning, collaborative reviewing

Procedia PDF Downloads 103
6115 Study on the Influence of ‘Sports Module’ Teaching on High School Students’ Physical Quality

Authors: Xiaoming Zeng, Xiaozan Wang, Qinping Xu, Shaoxian Wang

Abstract:

Research Purpose: The 2017 high school physical education and health curriculum standard advocates modular teaching. This study aims to explore the impact of ‘sports module’ teaching on the physical quality of high school students. Research methods: 800 senior high school students were randomly divided into two groups (400 in the experimental group and 400 in the control group). The experimental group followed modular physical education teaching, and the control group followed the conventional teaching mode for one semester. Before and after the experiment, the physical fitness of the subjects was tested, including vital capacity, the 50-meter run, the standing long jump, and the sitting forward bend. Results: After the experiment, the vital capacity (t = -4.007, p < 0.01), 50-meter run (t = 2.638, p < 0.01) and standing long jump (t = -4.067, p < 0.01) of the experimental group were significantly improved. High school sports modular teaching has particular characteristics: it attaches great importance to the independent development of students' personality, students can choose their favorite modules to develop various skills and actively participate in various sports activities in the classroom, and the density and intensity of exercise are greatly improved. Students' speed (50-meter run), cardiopulmonary endurance (vital capacity), agility, and strength (standing long jump) scores improved markedly. At the same time, it was found that the students' sitting forward bend did not show significant improvement, which was attributed to the lack of relevant equipment at school and the students' inattention to stretching after exercise or lack of regular exercise to promote flexibility. Conclusion: (1) ‘Sports module’ teaching can effectively improve the physical quality of high school students, mainly in cardiopulmonary function, speed, and explosive power. (2) In the future, ‘sports module’ teaching should give full play to its advantages and add courses to improve students' flexibility.

Keywords: module teaching, physical quality, senior high school student, sports

Procedia PDF Downloads 124
6114 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius, volume, and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified enormously during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, where the number of iteration steps serves as the regularization parameter. We successfully developed a semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 μm, which does not only cover the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of +/- 0.03 is achieved in all modes. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
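A minimal sketch of truncated SVD regularization on a toy ill-posed problem, assuming random stand-ins for the optical-data kernel and a synthetic mono-modal PSD:

```python
# Truncated SVD (TSVD) for an ill-posed linear model A x = b (optical data -> size
# distribution). A and the data are synthetic; the truncation index k plays the
# role of a regularization parameter.
import numpy as np

def tsvd_solve(A: np.ndarray, b: np.ndarray, k: int) -> np.ndarray:
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the k largest singular values; the rest would amplify measurement noise.
    coeffs = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ coeffs

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 40))                                  # 5 optical coefficients, 40 size bins
x_true = np.exp(-0.5 * ((np.arange(40) - 15) / 5.0) ** 2)     # mono-modal "PSD"
b = A @ x_true + 0.1 * rng.normal(size=5)                     # noisy measurements (illustrative noise)

for k in (2, 3, 5):
    print(k, np.linalg.norm(tsvd_solve(A, b, k) - x_true))    # reconstruction error vs. truncation
```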

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 345
6113 Dissolution of South African Limestone for Wet Flue Gas Desulphurization

Authors: Lawrence Koech, Ray Everson, Hein Neomagus, Hilary Rutto

Abstract:

Wet flue gas desulphurization (FGD) systems are commonly used to remove sulphur dioxide from flue gas by contacting it with limestone in the aqueous phase, which is obtained by dissolution. Dissolution is important as it affects the overall performance of a wet FGD system. In the present study, the effects of pH, stirring speed, solid-to-liquid ratio and acid concentration on the dissolution of limestone using an organic acid (adipic acid) were investigated using the pH stat apparatus. Calcium ions were analyzed at the end of each experiment using an atomic absorption spectroscopy (AAS) machine.

Keywords: desulphurization, limestone, dissolution, pH stat apparatus

Procedia PDF Downloads 464
6112 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used to extract forest cover classes in ENVI 4.8 and GIS software. The diameter at breast height (DBH), above-ground biomass (AGB) and carbon stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% of the trees range from 12 m to 17 m. The CS of the three forest covers based on the AGB were 20819.59 kg per 20x20 m for closed broadleaf, 8609.82 kg per 20x20 m for broadleaf plantation and 15545.57 kg per 20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high CS.
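Two of the LiDAR derivatives can be sketched as simple raster arithmetic, assuming the usual CHM = DSM - DTM convention and synthetic arrays in place of the LAS-derived rasters:

```python
# Canopy height model (CHM) as the difference between surface and terrain rasters,
# and percent canopy cover as the fraction of pixels above a height threshold.
# The arrays are synthetic stand-ins for the study's LiDAR-derived rasters.
import numpy as np

rng = np.random.default_rng(1)
dtm = 100.0 + rng.normal(0, 0.5, size=(200, 200))                   # bare-earth elevation (m)
dsm = dtm + np.clip(rng.normal(10, 6, size=(200, 200)), 0, None)    # surface elevation (m)

chm = dsm - dtm                                                     # canopy height model (m)
canopy_cover = np.mean(chm >= 5.0) * 100.0                          # % of pixels taller than 5 m

print(f"mean canopy height: {chm.mean():.1f} m, canopy cover: {canopy_cover:.0f}%")
```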

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 395
6111 Impact of Pedagogical Techniques on the Teaching of Sports Sciences

Authors: Muhammad Saleem

Abstract:

Background: The teaching of sports sciences encompasses a broad spectrum of disciplines, including biomechanics, physiology, psychology, and coaching. Effective pedagogical techniques are crucial in imparting both theoretical knowledge and practical skills necessary for students to excel in the field. The impact of these techniques on students’ learning outcomes, engagement, and professional preparedness remains a vital area of study. Objective: This study aims to evaluate the effectiveness of various pedagogical techniques used in the teaching of sports sciences. It seeks to identify which methods most significantly enhance student learning, retention, engagement, and practical application of knowledge. Methods: A mixed-methods approach was employed, including both quantitative and qualitative analyses. The study involved a comparative analysis of traditional lecture-based teaching, experiential learning, problem-based learning (PBL), and technology-enhanced learning (TEL). Data were collected through surveys, interviews, and academic performance assessments from students enrolled in sports sciences programs at multiple universities. Statistical analysis was used to evaluate academic performance, while thematic analysis was applied to qualitative data to capture student experiences and perceptions. Results: The findings indicate that experiential learning and PBL significantly improve students' understanding and retention of complex sports science concepts compared to traditional lectures. TEL was found to enhance engagement and provide students with flexible learning opportunities, but its impact on deep learning varied depending on the quality of the digital resources. Overall, a combination of experiential learning, PBL, and TEL was identified as the most effective pedagogical approach, leading to higher student satisfaction and better preparedness for real-world applications. Conclusion: The study underscores the importance of adopting diverse and student-centered pedagogical techniques in the teaching of sports sciences. While traditional lectures remain useful for foundational knowledge, integrating experiential learning, PBL, and TEL can substantially improve student outcomes. These findings suggest that educators should consider a blended approach to pedagogy to maximize the effectiveness of sports science education.

Keywords: sport sciences, pedagogical techniques, health and physical education, problem-based learning, student engagement

Procedia PDF Downloads 34
6110 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's questions. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, using a model trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Question (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to any questions that have time-varying answers. For illustration, if the query is 'Where will the next Olympics be held?', the gold answer as given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this is absolutely correct. But if the same question is asked in 2022, then the answer is 'Paris, 2024'. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% for a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be guided towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
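The K-document extraction step can be sketched with Beautiful Soup as follows; the URL list is a placeholder for real search-engine results, and K trades retrieval time against accuracy:

```python
# Fetch up to K result pages and strip them down to plain text for passage ranking.
# The URL below is a placeholder, not part of the described system.
import requests
from bs4 import BeautifulSoup

K = 3
urls = ["https://en.wikipedia.org/wiki/Olympic_Games"]     # placeholder result list

documents = []
for url in urls[:K]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):                  # drop non-content markup
        tag.decompose()
    documents.append(soup.get_text(separator=" ", strip=True))

print(len(documents), "documents fetched;", len(documents[0].split()), "words in the first")
```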

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 104
6109 Psychological Well-Being and Human Rights of Teenage Mothers Attending One Secondary School in the Eastern Cape, South Africa

Authors: Veliswa Nonfundo Hoho, Jabulani Gilford Kheswa

Abstract:

This paper reports on teenage motherhood and its adverse outcomes for the academic performance, emotional well-being and sexual relationships of adolescent females. Drawing from Ryff's six dimensions of psychological well-being and Bronfenbrenner's ecological model, which underpinned this study, teenage motherhood has been found to be linked with multiple factors such as poverty, negative self-esteem, substance abuse, cohabitation, intimate partner violence and ill-health. Furthermore, research indicates that in schools where educators fail to perform their duties in loco parentis to motivate adolescent female learners who are mothers, absenteeism, poor academic performance and learned helplessness are likely. The aim of this research was two-fold, namely: (i) to determine the impact of teenage motherhood on the psychological well-being of teenage mothers, and (ii) to investigate the policies which protect the human rights of teenage mothers attending secondary schools. In a qualitative study conducted in one secondary school in Fort Beaufort, Eastern Cape, South Africa, fifteen Xhosa-speaking teenage mothers, aged 15-18 years, were interviewed. The sample was recruited by means of snowball sampling. To safeguard the human dignity of the respondents, informed consent, confidentiality, anonymity and privacy were assured. For trustworthiness, this research ensured that credibility, neutrality, and transferability were met. Following axial and open coding of responses, five themes were identified: health issues of teenage mothers, lack of support, violation of human rights, an impaired sense of purpose in life and intimate partner violence. From these findings, it is clear that teenage mothers lack resilience and are susceptible to contracting sexually transmitted infections and HIV/AIDS because they are submissive and hopeless. Furthermore, owing to the stigma that the teenage mothers experience from family members, they resort to alcohol and drug abuse and feel demotivated to bond with their babies. In conclusion, the recommendations are that the Health and Social Development departments collaborate to empower the psychological well-being of teenage mothers. Furthermore, school policies on discrimination should be enacted and consistently implemented.

Keywords: depression, discrimination, self-esteem, teenage mothers

Procedia PDF Downloads 278
6108 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher education institutions can continue to benefit from their social and academic programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in the prevalence of teen pregnancies, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions and more so in developed countries. However, with the complexity of the current student body in a changing world, a multidimensional approach combining multiple factors that enhance character development outcomes is needed to suit the changing trends. The main purpose of this study was to identify opportunities in colleges and develop a model for predicting character development outcomes. A survey questionnaire composed of 7 scales, with in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to a total of five hundred and one students at the 3rd and 4th year level in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The results of the analysis showed that in-classroom interactions have a substantial direct influence on the character development outcomes of the students (r = .75, p < .05). In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but in an indirect way. The study concluded that the classroom offers many opportunities for teachers to teach, model and integrate character development among their students. Thus, suggestions are made to public colleges and universities to deliberately boost and implement experiences that cultivate character within the classroom. These may contribute tremendously to the students' character development outcomes and hence render effective models of behaviour analysis in higher education.

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 139
6107 A Comparison of Convolutional Neural Network Architectures for the Classification of Alzheimer’s Disease Patients Using MRI Scans

Authors: Tomas Premoli, Sareh Rowlands

Abstract:

In this study, we investigate the impact of various convolutional neural network (CNN) architectures on the accuracy of diagnosing Alzheimer’s disease (AD) using patient MRI scans. Alzheimer’s disease is a debilitating neurodegenerative disorder that affects millions worldwide. Early, accurate, and non-invasive diagnostic methods are required for providing optimal care and symptom management. Deep learning techniques, particularly CNNs, have shown great promise in enhancing this diagnostic process. We aim to contribute to the ongoing research in this field by comparing the effectiveness of different CNN architectures and providing insights for future studies. Our methodology involved preprocessing MRI data, implementing multiple CNN architectures, and evaluating the performance of each model. We employed intensity normalization, linear registration, and skull stripping for our preprocessing. The selected architectures included VGG, ResNet, and DenseNet models, all implemented using the Keras library. We employed transfer learning and trained models from scratch to compare their effectiveness. Our findings demonstrated significant differences in performance among the tested architectures, with DenseNet201 achieving the highest accuracy of 86.4%. Transfer learning proved to be helpful in improving model performance. We also identified potential areas for future research, such as experimenting with other architectures, optimizing hyperparameters, and employing fine-tuning strategies. By providing a comprehensive analysis of the selected CNN architectures, we offer a solid foundation for future research in Alzheimer’s disease diagnosis using deep learning techniques. Our study highlights the potential of CNNs as a valuable diagnostic tool and emphasizes the importance of ongoing research to develop more accurate and effective models.
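A minimal sketch of the transfer-learning setup for one of the compared backbones (DenseNet201 in Keras); the input size, classification head and hyperparameters are illustrative assumptions, not the study's exact configuration:

```python
# Transfer learning with a pretrained DenseNet201 backbone. The study's preprocessing
# (intensity normalisation, linear registration, skull stripping) happens before this stage.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # AD vs. control
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)   # datasets of labelled MRI slices
```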

Keywords: Alzheimer’s disease, convolutional neural networks, deep learning, medical imaging, MRI

Procedia PDF Downloads 81
6106 Study on the Self-Location Estimate by the Evolutional Triangle Similarity Matching Using Artificial Bee Colony Algorithm

Authors: Yuji Kageyama, Shin Nagata, Tatsuya Takino, Izuru Nomura, Hiroyuki Kamata

Abstract:

In a previous study, a technique for estimating self-location from a lunar image was proposed. In this paper, we improve on that conventional method with FPGA implementation in mind. Specifically, we introduce the Artificial Bee Colony algorithm to reduce search time. In addition, we use fixed-point arithmetic to enable high-speed operation on the FPGA.
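To illustrate the kind of search the abstract refers to, the following sketch implements a generic Artificial Bee Colony loop that minimises a black-box matching cost. The cost function, bounds, colony size, and abandonment limit are placeholders rather than the authors' settings, and the fixed-point FPGA arithmetic is not reproduced here.

```python
# Hypothetical sketch of an Artificial Bee Colony (ABC) search loop.
# cost() stands in for the triangle-similarity matching score.
import numpy as np

def abc_minimise(cost, bounds, n_food=20, limit=30, n_iter=200, seed=None):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    foods = rng.uniform(lo, hi, size=(n_food, dim))   # candidate solutions
    costs = np.array([cost(f) for f in foods])
    trials = np.zeros(n_food, int)

    def try_neighbour(i):
        # Perturb one randomly chosen dimension toward another food source.
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        c = cost(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):            # employed bee phase
            try_neighbour(i)
        fitness = 1.0 / (1.0 + costs)       # sample better sources more often
        probs = fitness / fitness.sum()
        for _ in range(n_food):             # onlooker bee phase
            try_neighbour(rng.choice(n_food, p=probs))
        stale = trials > limit              # scout bee phase: abandon stale sources
        n_stale = int(stale.sum())
        if n_stale:
            foods[stale] = rng.uniform(lo, hi, size=(n_stale, dim))
            costs[stale] = [cost(f) for f in foods[stale]]
            trials[stale] = 0

    best = int(np.argmin(costs))
    return foods[best], costs[best]

# Toy usage: minimise a 2-D quadratic standing in for the matching cost.
best_x, best_c = abc_minimise(lambda x: float(np.sum((x - 3.0) ** 2)),
                              bounds=([-10, -10], [10, 10]), seed=0)
```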

Keywords: SLIM, artificial bee colony algorithm, location estimate, evolutional triangle similarity

Procedia PDF Downloads 522
6105 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators

Authors: K. O'Malley

Abstract:

Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated by predictive modeling over vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A structured search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in English. Eligibility was ascertained from the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications, which were screened. Of the 225 original articles identified, 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan, and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent, with a mean score across all studies of 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT may lend itself to unfair use (such as plagiarism of deliverable coursework) and pose unforeseen ethical challenges arising from algorithmic bias. Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. Across the included studies, ChatGPT exhibited a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the insight or knowledge required to recognize errors, inaccuracies, or false information. Educators must inform themselves of these emerging challenges in order to address them effectively and mitigate potential disruption in academic fora.

Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university

Procedia PDF Downloads 39
6104 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is a smaller password space, which makes them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method we developed is grid-free and template-free. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection, using two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration; three durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. Machine learning algorithms were then applied to determine whether a person is a genuine user or an imposter based on the password entered. Five machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, the Naive Bayes classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder during authentication. For each password entered, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three classifiers were trained using data from the four sessions: Classifiers A, B, and C were trained and tested using data from the password creation session together with password replication at 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A across the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
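The classifier comparison described above can be sketched with scikit-learn as follows. The synthetic feature matrix (password score, length, speed, size) and the genuine/imposter labels are placeholders for the study's session data, and the classifier hyperparameters are library defaults rather than the authors' settings.

```python
# Hypothetical sketch: comparing the five classifiers on four normalized
# gesture-password features. Data below is synthetic placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))        # score, length, speed, size (placeholder)
y = rng.integers(0, 2, size=300)     # 1 = genuine user, 0 = imposter (placeholder)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
    "SVM (RBF kernel)": SVC(kernel="rbf", gamma="scale"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    # Normalize the four features before classification, as in the study.
    pipeline = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print(f"{name:>18}: {scores.mean():.3f} +/- {scores.std():.3f}")
```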

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 110