Search results for: supervised machine learning algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11237

8447 A Study on the Implementation of Differentiating Instruction Based on Universal Design for Learning

Authors: Yong Wook Kim

Abstract:

The diversity of students in regular classrooms is increasing due to the expansion of inclusive education and the growing number of multicultural students in South Korea. In this diverse classroom environment, universal design for learning (UDL) has been proposed as a way to meet both the educational needs and the social expectations of student achievement. UDL offers a variety of practical teaching methods, one of which is differentiating instruction. Differentiating instruction has been criticized for resource limitations and organizational resistance, and it lacks an easy-to-implement framework. However, within the framework provided by UDL, differentiating instruction can be implemented flexibly. In practice, UDL and differentiating instruction are complementary, but there is still a lack of research suggesting specific implementation methods that apply both concepts at the same time. This study was conducted to investigate the effects of differentiating instruction strategies according to learner characteristics (readiness, interest, learning profile); the components of differentiating instruction (content, process, performance, learning environment); the UDL principles (representation, action and expression, engagement) present in differentiating instruction; and the implementation of UDL-based differentiating instruction through Planning for All Learners (PAL) and the UDL Lesson Plan Cycle. Such a series of studies can enhance the possibility of more concrete and realistic UDL-based teaching and learning strategies in the classroom, especially in inclusive settings.

Keywords: universal design for learning, differentiating instruction, UDL lesson plan, PAL

Procedia PDF Downloads 182
8446 Fixed Point of Lipschitz Quasi Nonexpansive Mappings

Authors: Maryam Moosavi, Hadi Khatibzadeh

Abstract:

The main purpose of this paper is to study the proximal point algorithm for quasi-nonexpansive mappings in Hadamard spaces. △-convergence and strong convergence of cyclic resolvents for a finite family of quasi-nonexpansive mappings to a common fixed point of the mappings are established.
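
For orientation (this is the classical resolvent formulation for a convex function on a Hadamard space (X, d), not the authors' exact scheme; their resolvents for quasi-nonexpansive mappings generalize this construction), the proximal point algorithm iterates

```latex
x_{n+1} = J_{\lambda_n}(x_n), \qquad
J_{\lambda}(x) = \operatorname*{arg\,min}_{y \in X}
\left[ f(y) + \frac{1}{2\lambda}\, d^{2}(x, y) \right]
```

where f is a proper convex lower semicontinuous function and λₙ > 0. In the cyclic version for a finite family of mappings, the resolvent applied at step n cycles through the family.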

Keywords: fixed point, Hadamard space, proximal point algorithm, quasi-nonexpansive sequence of mappings, resolvent

Procedia PDF Downloads 74
8445 Affective (And Effective) Teaching and Learning: Higher Education Gets Social Again

Authors: Laura Zizka, Gaby Probst

Abstract:

The Covid-19 pandemic has affected the way Higher Education Institutions (HEIs) deliver their courses. From emergency remote teaching, in which all students and faculty were immediately confined to teaching and learning from home, the continually evolving sanitary situation obliged HEIs to adopt other methods of teaching and learning, from blended courses that included both synchronous and asynchronous activities to hy-flex models in which some students were on campus while others followed the course simultaneously online. Each semester brought new challenges for HEIs and, subsequently, additional emotional reactions. This paper investigates the affective side of teaching and learning in various online modalities and its toll on students and faculty members over the past three semesters. The findings confirm that students and faculty with greater self-efficacy, flexibility, and resilience reported positive emotions and embraced the opportunities that these semesters offered. While HEIs have begun a new semester in an attempt to return to 'normal' face-to-face courses, this paper posits that there are lessons to be learned from the past three semesters. The opportunities that arose from the challenges of the pandemic should be considered when moving forward, with a greater emphasis on the affective aspect of teaching and learning in HEIs worldwide.

Keywords: effective teaching and learning, higher education, engagement, interaction, motivation

Procedia PDF Downloads 102
8444 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm

Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian

Abstract:

The present paper addresses research in the area of regression testing, with emphasis on automated tools as well as prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out. The difference in approach between industry, with the business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization, and a case study is discussed to illustrate this methodology. An industrial case study is also described, in which the number of test cases is so large that they have to be grouped into test suites. In such situations, the genetic algorithm we propose can be used to reconfigure these test suites in each cycle of regression testing. A comparison is made between a proprietary tool and an open-source tool using the above-mentioned metrics. Our approach is clarified through several tables.
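
The paper's own algorithm is not reproduced in the abstract; the following is a minimal sketch under the assumption of a permutation-encoded GA with APFD (Average Percentage of Faults Detected, from the keywords) as fitness. The fault matrix and GA parameters are illustrative.

```python
import random

def apfd(order, faults, n_faults):
    """APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n)."""
    n = len(order)
    first = []
    for f in range(n_faults):
        pos = next((i + 1 for i, t in enumerate(order) if f in faults[t]), n)
        first.append(pos)
    return 1 - sum(first) / (n * n_faults) + 1 / (2 * n)

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining slots in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [t for t in p2 if t not in child[a:b]]
    child[:a], child[b:] = rest[:a], rest[a:]
    return child

def prioritize(faults, n_faults, pop_size=30, gens=50, pm=0.2):
    tests = list(faults)
    pop = [random.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: apfd(o, faults, n_faults), reverse=True)
        nxt = pop[:2]                      # elitism: keep the best two
        while len(nxt) < pop_size:
            c = order_crossover(*random.sample(pop[:10], 2))
            if random.random() < pm:       # swap mutation
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
            nxt.append(c)
        pop = nxt
    return max(pop, key=lambda o: apfd(o, faults, n_faults))

# Toy fault matrix: test id -> set of fault ids it detects (illustrative).
faults = {0: {0, 1}, 1: {2}, 2: {0, 3}, 3: {4}, 4: set()}
best = prioritize(faults, n_faults=5)
print(best, apfd(best, faults, 5))
```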

Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, selenium tool

Procedia PDF Downloads 415
8443 Polarimetric Synthetic Aperture Radar Data Classification Using Support Vector Machine and Mahalanobis Distance

Authors: Najoua El Hajjaji El Idrissi, Necip Gokhan Kasapoglu

Abstract:

Polarimetric Synthetic Aperture Radar (PolSAR) imaging is a powerful technique used for earth observation and the classification of surfaces. Forest evolution has been one of the vital areas of attention for remote sensing experts. Information about forest areas can be obtained by remote sensing, whether with active radars or optical instruments. However, due to weather constraints such as cloud cover, only limited information can be recovered from optical data, and for that reason PolSAR is used as a powerful tool for forest inventory. In this paper, we applied a support vector machine (SVM) and the Mahalanobis distance to fully polarimetric AIRSAR P-, L-, and C-band data from the Nezer forest areas; the classification is based on the separation of different tree ages. The classification results were evaluated, and they show that the SVM performs better than the Mahalanobis distance, achieving approximately 75% accuracy. This result shows that SVM classification is a useful method for evaluating fully polarimetric SAR data with a sufficient level of accuracy.
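
As a rough illustration of the comparison (not the authors' pipeline; the features, class count, and labels below are placeholders for polarimetric channels and tree-age classes), an SVM and a minimum-Mahalanobis-distance classifier can be set up as follows:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder features: rows = pixels, columns = polarimetric channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 9))
y = rng.integers(0, 3, size=600)          # three tree-age classes
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# SVM classifier.
svm = SVC(kernel="rbf").fit(Xtr, ytr)
print("SVM accuracy:", svm.score(Xte, yte))

# Minimum-Mahalanobis-distance classifier: assign each sample to the
# class whose mean is closest under that class's covariance.
stats = {}
for c in np.unique(ytr):
    Xc = Xtr[ytr == c]
    stats[c] = (Xc.mean(axis=0), np.linalg.inv(np.cov(Xc, rowvar=False)))

def mahalanobis_predict(X):
    preds = []
    for x in X:
        d = {c: (x - mu) @ icov @ (x - mu) for c, (mu, icov) in stats.items()}
        preds.append(min(d, key=d.get))
    return np.array(preds)

print("Mahalanobis accuracy:", (mahalanobis_predict(Xte) == yte).mean())
```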

Keywords: classification, synthetic aperture radar, SAR polarimetry, support vector machine, mahalanobis distance

Procedia PDF Downloads 118
8442 Development of a Small-Group Teaching Method for Enhancing the Learning of Basic Acupuncture Manipulation Optimized with the Theory of Motor Learning

Authors: Wen-Chao Tang, Tang-Yi Liu, Ming Gao, Gang Xu, Hua-Yuan Yang

Abstract:

This study developed a method for teaching acupuncture manipulation in small groups, optimized with the theory of motor learning. Sixty acupuncture students and their teacher participated in the research. Motion videos were recorded of their manipulations using the lifting-thrusting method. These videos were analyzed with Simi Motion software to acquire the movement parameters of the thumb tip. The velocity curves along the Y axis were used to generate small teaching groups, clustered by a self-organizing map (SOM) and k-means; ten groups were generated. Targeted instruction, based on the comparative results for each group as well as the videos of the teacher and students, was provided to the members of each group. According to the theory and research of motor learning, factors and techniques such as video instruction, observational learning, external focus, and summary feedback were integrated into this teaching method. These efforts were intended to improve the effectiveness of current acupuncture teaching methods within limited classroom teaching time and extracurricular training.
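
A minimal sketch of the SOM-plus-k-means grouping step (assumptions: velocity curves resampled to a fixed length, and the minisom package for the SOM; the paper does not name its implementation or map size):

```python
import numpy as np
from minisom import MiniSom          # pip install minisom
from sklearn.cluster import KMeans

# Placeholder data: one resampled Y-axis velocity curve per student.
rng = np.random.default_rng(0)
curves = rng.normal(size=(60, 100))   # 60 students x 100 time samples

# 1) Train a SOM on the velocity curves.
som = MiniSom(5, 5, curves.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=0)
som.train_random(curves, 1000)

# 2) Cluster the SOM codebook vectors into 10 teaching groups.
codebook = som.get_weights().reshape(-1, curves.shape[1])
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(codebook)

# 3) Assign each student to the cluster of their best-matching unit.
def group_of(curve):
    i, j = som.winner(curve)
    return km.labels_[i * 5 + j]

groups = np.array([group_of(c) for c in curves])
print(np.bincount(groups))            # group sizes
```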

Keywords: acupuncture, group teaching, video instruction, observational learning, external focus, summary feedback

Procedia PDF Downloads 161
8441 Data Structure Learning Platform to Aid in Higher Education IT Courses (DSLEP)

Authors: Estevan B. Costa, Armando M. Toda, Marcell A. A. Mesquita, Jacques D. Brancher

Abstract:

The advances in technology over the last five years have allowed improvements in the educational area, such as the increased development of educational software. One of the techniques that emerged in this period is called gamification: the use of video game mechanics outside the context of games. Recent studies involving this technique have reported positive results from applying these concepts in areas such as marketing, health, and education. In education, studies cover everything from elementary to higher education, with many variations to suit educators' methodologies. Within higher education, focusing on IT courses, data structures are an important subject taught in many of these courses, as they are the basis for many systems. Based on the above, this paper presents the development of an interactive web learning environment, called DSLEP (Data Structure Learning Platform), to aid students in higher-education IT courses. The system includes the basic concepts covered in this subject, such as stacks, queues, lists, arrays, and trees, and was implemented to ease the insertion of new structures. It also implements gamification concepts, such as points, levels, and leaderboards, to engage students in the search for knowledge and to stimulate self-learning.

Keywords: gamification, Interactive learning environment, data structures, e-learning

Procedia PDF Downloads 477
8440 Guidelines for the Development of Community Classroom for Research and Academic Services in Ranong Province

Authors: Jenjira Chinnawong, Phusit Phukamchanoad

Abstract:

The objective of this study is to explore guidelines for the development of a community classroom for research and academic services in Ranong province. Interviews with leaders involved in the development of learning resources, research, and community services showed that the leaders' perception of this development was at the highest level: they were aware of every step of the policies on community classroom implementation, research, and community services in Ranong. Leaders' perceptions were at a moderate level with respect to the analysis of problems related to the procedures of community classroom management, research, and community services, especially in planning and implementing the examination, improvement, and development of learning resources so that they are in good condition and ready to serve visitors. Their participation in the development of the community classroom, research, and community services was at a high level, particularly in monitoring and evaluating the development of learning resources and in reporting on its results. The most important thing in the development of the community classroom, research, and community services in Ranong is the need to integrate the three principles of knowledge building (teaching, research, and academic services) in order to create an identity for the local community classroom for anyone interested in visiting and learning useful knowledge. As a result, the community classroom, research, and community services became well known both inside and outside the university.

Keywords: community classroom, learning resources, development, participation

Procedia PDF Downloads 140
8439 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

The use of media (video, animation, graphics) has long been employed by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development; however, overreliance is a concern. This paper is part of a larger Ph.D. study looking into these new challenges for sports educators: most notably, how to exploit the deep learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's cognitive theory of multimedia learning, which proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the prior-knowledge principle suggests different learning outcomes for novice and non-novice learners, respectively. Little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current digital coaching content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis. It is hoped the resulting data will help identify and inform future instructional content design and delivery for sports practitioners and help promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 84
8438 An Analysis of a Canadian Personalized Learning Curriculum

Authors: Ruthanne Tobin

Abstract:

The shift to a personalized learning (PL) curriculum in Canada represents an innovative approach to teaching and learning that is also evident in various initiatives across the 32-nation OECD. The premise behind PL is that empowering individual learners to have more input into how they access and construct knowledge, and express their understanding of it, will result in more meaningful school experiences and academic success. In this paper presentation, the author reports on a document analysis of the new curriculum in the province of British Columbia. Three theoretical frameworks are used to analyze the new curriculum. Framework 1 focuses on five dominant aspects (FDA) of PL at the classroom level. Framework 2 focuses on conceptualizing and enacting personalized learning (CEPL) within three spheres of influence. Framework 3 focuses on the integration of three types of knowledge (content, technological, and pedagogical). Analysis is ongoing, but preliminary findings suggest that the new curriculum addresses Framework 1 quite well; this framework identifies five areas of personalized learning: 1) assessment for learning; 2) effective teaching and learning; 3) curriculum entitlement (choice); 4) school organization; and 5) 'beyond the classroom walls' (learning in the community). Framework 2 appears to be less well developed in the new curriculum. This framework speaks to the dynamics of PL within three spheres of interaction: 1) nested agency, comprising overarching constraints [and enablers] from policy makers, school administrators, and the community; 2) relational agency, which refers to a capacity for professionals to develop a network of expertise to serve shared goals; and 3) students' personalized learning experience, which integrates differentiation with self-regulation strategies. Framework 3 appears to be well executed in the new PL curriculum, as it employs the theoretical model of technological pedagogical content knowledge (TPACK), in which there are three interdependent bodies of knowledge. Notable within this framework is the emphasis on pairing technologies with excellent pedagogies to significantly assist students and teachers. This work will be of high relevance to educators interested in innovative school reform.

Keywords: curriculum reform, K-12 school change, innovations in education, personalized learning

Procedia PDF Downloads 262
8437 Efficient Feature Fusion for Noise Iris in Unconstrained Environment

Authors: Yao-Hong Tsai

Abstract:

This paper presents an efficient fusion algorithm for iris images that generates a stable feature for recognition in unconstrained environments. Recently, iris recognition systems have focused on real scenarios in daily life without the subject's cooperation. Under large variations in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image that is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. The detection of the iris image is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of a verification system based on iris recognition.
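
The LBP-histogram feature the abstract mentions can be sketched with scikit-image (a minimal illustration; the fused input image below is a placeholder for the wavelet-fused iris, and the detection and weighting stages are omitted):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, p=8, r=1.0):
    """Uniform LBP histogram of a grayscale image (values in [0, 1])."""
    codes = local_binary_pattern(image, P=p, R=r, method="uniform")
    n_bins = p + 2                       # uniform patterns + "other"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins),
                           density=True)
    return hist

# Placeholder "fused iris" image; a real pipeline would pass the
# wavelet-fused image produced from several noisy captures.
rng = np.random.default_rng(0)
iris = rng.random((64, 64))
print(lbp_histogram(iris).round(3))
```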

Keywords: image fusion, iris recognition, local binary pattern, wavelet

Procedia PDF Downloads 355
8436 A Parallel Implementation of k-Means in MATLAB

Authors: Dimitris Varsamis, Christos Talagkozis, Alkiviadis Tsimpiris, Paris Mastorocostas

Abstract:

The aim of this work is the parallel implementation of k-means in MATLAB, in order to reduce execution time. Specifically, a new MATLAB function for the serial k-means algorithm is developed which meets all the requirements for conversion to a MATLAB function with parallel computations. Additionally, two different variants for the definition of initial values are presented. In the sequel, the parallel approach is presented. Finally, performance tests of the computation times with respect to the number of features and classes are illustrated.
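
The authors work in MATLAB; as a language-neutral illustration of the same idea (an assumption, not their code), the assignment step of k-means, which dominates the run time, can be parallelized over data chunks, much like a parfor loop:

```python
import numpy as np
from joblib import Parallel, delayed   # pip install joblib

def assign_chunk(chunk, centers):
    """Nearest-center index for each row of a data chunk."""
    d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def kmeans(X, k, iters=20, n_jobs=4, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Parallel assignment step, analogous to a MATLAB parfor loop.
        chunks = np.array_split(X, n_jobs)
        labels = np.concatenate(
            Parallel(n_jobs=n_jobs)(
                delayed(assign_chunk)(c, centers) for c in chunks))
        # Update step: recompute each center as the mean of its members.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

X = np.random.default_rng(1).normal(size=(10_000, 8))
centers, labels = kmeans(X, k=5)
print(np.bincount(labels))
```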

Keywords: k-means algorithm, clustering, parallel computations, MATLAB

Procedia PDF Downloads 372
8435 Analysis and Rule Extraction of Coronary Artery Disease Data Using Data Mining

Authors: Rezaei Hachesu Peyman, Oliyaee Azadeh, Salahzadeh Zahra, Alizadeh Somayyeh, Safaei Naser

Abstract:

Coronary artery disease (CAD) is a major cause of disability in adults and a main cause of death in developed countries. In this study, data mining techniques, including decision trees, artificial neural networks (ANNs), and support vector machines (SVMs), are used to analyze CAD data. Data from 4948 patients who had suffered from heart disease were included in the analysis. CAD is the target variable, and 24 inputs or predictor variables are used for the classification. The performance of these techniques is compared in terms of sensitivity, specificity, and accuracy. The most significant factor influencing CAD is chest pain. Elderly males (age > 53) have a high probability of being diagnosed with CAD. The SVM algorithm is the most useful for distinguishing CAD patients from non-CAD ones. The application of data mining techniques to the analysis of coronary artery disease is a good method for investigating the existing relationships between variables.
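
The three classifiers and the reported metrics can be compared with a short scikit-learn harness; the data below are synthetic stand-ins for the 24 patient predictors, not the study's dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder data: 24 predictors, binary CAD label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    yhat = model.fit(Xtr, ytr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, yhat).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} "
          f"specificity={tn/(tn+fp):.2f} accuracy={(tp+tn)/len(yte):.2f}")
```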

Keywords: classification, coronary artery disease, data-mining, knowledge discovery, extract

Procedia PDF Downloads 642
8434 Oil Pollution Analysis of the Ecuadorian Rainforest Using Remote Sensing Methods

Authors: Juan Heredia, Naci Dilekli

Abstract:

The Ecuadorian Rainforest has been polluted for almost 60 years with little to no oversight, law, or regulation. The consequences have been vast environmental damage, such as pollution and deforestation, as well as sickness and the deaths of many people and animals. The aim of this paper is to quantify and localize the polluted zones, something that has not previously been done and that is the first step toward remediation. To approach this problem, multi-spectral remote sensing imagery was utilized with a novel algorithm developed for this study, based on four normalized indices available in the literature. The algorithm classifies pixels as polluted or healthy. The results of this study include a new algorithm for pixel classification and a quantification of the polluted area in the selected image. These results were validated against ground control points found in the literature. The main conclusion of this work is that, using hyperspectral images, it is possible to identify polluted vegetation. Future work includes environmental remediation, in-situ tests, and more extensive results that would inform new policymaking.
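
The abstract does not list the four indices; as an illustration of the general pattern (a normalized band ratio thresholded per pixel), NDVI is used as a stand-in below, with placeholder bands and an illustrative threshold:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, one of the family of
    normalized indices of the form (a - b) / (a + b)."""
    return (nir - red) / (nir + red + 1e-9)

# Placeholder bands; a real pipeline would read them from the scene.
rng = np.random.default_rng(0)
nir, red = rng.random((2, 256, 256))

index = ndvi(nir, red)
polluted = index < 0.2           # threshold is illustrative
share = polluted.mean() * 100
print(f"Polluted pixels: {share:.1f}% of the scene")
```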

Keywords: remote sensing, oil pollution quantification, Amazon forest, hyperspectral remote sensing

Procedia PDF Downloads 143
8433 Impact of Overall Teaching Program of Anatomy in Learning: A Students Perspective

Authors: Mamatha Hosapatna, Anne D. Souza, Antony Sylvan Dsouza, Vrinda Hari Ankolekar

Abstract:

Our study intends to determine the effect of the overall anatomy teaching program on students' learning. The advancement of various teaching methodologies in the present era has led to progressive changes in education. Students should be able to correlate theoretical and practical knowledge even in the early years of their medical education and should be able to apply it in patient care. The present study therefore aims to assess the impact the current anatomy teaching program has on students' learning and the extent to which it makes the learning program effective. The specific objective is to assess the impact of the overall anatomy teaching program on students' learning. Description of the proposed process: a questionnaire will be constructed, and students will be asked to put forward their views regarding the anatomy teaching program and its method of assessment; suggestions, if any, will also be encouraged. The study is cross-sectional and observational. The target population is first-year MBBS students, with a sample size of 250. The assessment plan is to obtain students' responses using the questionnaire, calculate the percentages of the responses obtained, and tabulate the results.

Keywords: anatomy, questionnaire, observational study, MBBS students

Procedia PDF Downloads 477
8432 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable Logic Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system: it influences PF by providing and maintaining the speed entered through the HMI to the centrifugal pump, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000, and random RPM changes; 8 trials per range) tested in random order. The performance of the system's PID algorithm in pressure control was assessed during 10 EVHP experiments using porcine hearts. Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and more precise in reaching the desired RPM (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system has a median precision of ±1 mmHg error, and the median stabilizing times for 15 and 20 mmHg MAP changes are 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster, with 60% less error, than the EO for RPM-flow control. In pressure control mode, it demonstrates high precision and fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability, and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
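
The abstract names a PID loop on MAP; a textbook discrete PID of the kind described (the gains, timing, and toy plant model below are illustrative, not the authors' tuned values) looks like this:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy loop: drive a first-order "pressure" toward a 60 mmHg target MAP.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
map_mmhg = 40.0
for _ in range(100):
    pump_cmd = pid.step(target=60.0, measured=map_mmhg)
    map_mmhg += 0.05 * pump_cmd          # crude plant model
print(round(map_mmhg, 2))
```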

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 161
8431 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is now a popular research area. Many different power forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective power forecasting of renewable energy load allows decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example predicting wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying the number of layers, cell counts, and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance was compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models were 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
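
A minimal LSTM forecaster of the kind described can be set up in Keras; the layer sizes, window length, and synthetic series below are assumptions for illustration (the paper trained 432 variants on hourly consumption data):

```python
import numpy as np
from tensorflow import keras

# Placeholder hourly series shaped into (samples, 24-step window, features).
rng = np.random.default_rng(0)
series = rng.random(2000).astype("float32")
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                       # (samples, timesteps, 1 feature)

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE]
```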

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 248
8430 Verification & Validation of Map Reduce Program Model for Parallel K-Mediod Algorithm on Hadoop Cluster

Authors: Trapti Sharma, Devesh Kumar Srivastava

Abstract:

This paper is an analysis of a MapReduce implementation, verifying and validating the MapReduce solution model for the parallel K-Mediod algorithm on a Hadoop cluster. MapReduce is a programming model that allows huge amounts of data to be processed in parallel on a large number of machines. It is especially well suited to constant or moderately changing datasets, since the setup cost of a job is usually high. MapReduce has gradually become the framework of choice for 'big data': the MapReduce model allows systematic and fast processing of large-scale data on a cluster of compute nodes. One of the primary concerns in Hadoop is how to minimize the completion time (i.e., makespan) of a set of MapReduce jobs. In this paper, we have verified and validated various MapReduce applications, such as wordcount, grep, terasort, and the parallel K-Mediod clustering algorithm. We found that as the number of nodes increases, the completion time decreases.
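
Of the applications listed, wordcount is the canonical MapReduce example; a Hadoop Streaming version (a standard sketch, not the authors' code) consists of a mapper and a reducer reading stdin:

```python
# mapper.py - emit (word, 1) pairs, one per word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py - sum counts per word (Hadoop delivers input sorted by key).
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

A typical invocation (the streaming jar path varies by installation) is `hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out`.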

Keywords: hadoop, mapreduce, k-mediod, validation, verification

Procedia PDF Downloads 355
8429 Fingerprint Image Encryption Using a 2D Chaotic Map and Elliptic Curve Cryptography

Authors: D. M. S. Bandara, Yunqi Lei, Ye Luo

Abstract:

Fingerprints are suitable as long-term markers of human identity since they provide detailed and unique individual features that are difficult to alter and durable over a lifetime. In this paper, we propose an algorithm to encrypt and decrypt fingerprint images using a specially designed Elliptic Curve Cryptography (ECC) procedure based on block ciphers. In addition, to increase the confusion effect of fingerprint encryption, we also utilize a chaotic method called the Arnold Cat Map (ACM) for 2D scrambling of pixel locations. Experiments were carried out with various types of efficiency and security analyses. As a result, we demonstrate that the proposed fingerprint encryption/decryption algorithm is advantageous in several different aspects, including efficiency, security, and flexibility. In particular, using this algorithm, we achieve a margin of about 0.1% in the Number of Pixel Changing Rate (NPCR) test compared to state-of-the-art performances.
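
The ACM scrambling stage is simple to state: each pixel (x, y) of an N×N image moves to ((x + y) mod N, (x + 2y) mod N). A minimal sketch of that stage alone (the ECC block-cipher stage is omitted, and the input image is a placeholder):

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Scramble a square image with the 2D Arnold cat map:
    (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "ACM needs a square image"
    out = img
    for _ in range(iterations):
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = scrambled
    return out

rng = np.random.default_rng(0)
fp = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
scrambled = arnold_cat_map(fp, iterations=5)
# The map is periodic, so enough further iterations recover the original.
```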

Keywords: arnold cat map, biometric encryption, block cipher, elliptic curve cryptography, fingerprint encryption, Koblitz’s encoding

Procedia PDF Downloads 191
8428 HR MRI CS Based Image Reconstruction

Authors: Krzysztof Malczewski

Abstract:

A Magnetic Resonance Imaging (MRI) reconstruction algorithm using compressed sensing is presented in this paper. It is shown that the proposed approach improves the spatial resolution of MR images when highly undersampled k-space trajectories are applied. Compressed Sensing (CS) aims at reconstructing signals and images from significantly fewer measurements than were conventionally assumed necessary. MRI is a fundamental medical imaging method that struggles with an inherently slow data acquisition process. The application of CS to MRI has the potential for significant scan time reductions, with visible benefits for patients and health care economics. In this study, the objective is to combine a super-resolution image enhancement algorithm with the benefits of the CS framework to achieve a high-resolution MR output image. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the fidelity error. The presented algorithm accounts for cardiac and respiratory movements.
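
In the standard CS-MRI formulation (given here for context; the paper's exact functional may differ), the image x is recovered from undersampled k-space data y by solving

```latex
\min_{x} \; \lVert \Psi x \rVert_{1}
\quad \text{subject to} \quad
\lVert F_{u} x - y \rVert_{2} < \varepsilon
```

where F_u is the undersampled Fourier operator, Ψ is a sparsifying transform (e.g., wavelets), and ε bounds the acquisition noise.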

Keywords: super-resolution, MRI, compressed sensing, sparse-sense, image enhancement

Procedia PDF Downloads 414
8427 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images

Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam

Abstract:

Geospatial feature extraction is a very important issue in remote sensing research. Image classification has traditionally been based on statistical techniques, but in recent years data mining and machine learning techniques for automated image processing have been applied to remote sensing, focusing on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images and are compared against the result of maximum likelihood classification (MLC), a statistical technique, with an analysis of the pros and cons of each technique.
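
For reference (a standard formulation, not taken from the paper), the MLC baseline assigns a pixel x to the class ω_i that maximizes the Gaussian discriminant

```latex
g_{i}(x) = \ln p(\omega_{i}) - \tfrac{1}{2} \ln \lvert \Sigma_{i} \rvert
- \tfrac{1}{2} (x - \mu_{i})^{\top} \Sigma_{i}^{-1} (x - \mu_{i})
```

where μ_i and Σ_i are the class mean and covariance estimated from training pixels.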

Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification

Procedia PDF Downloads 337
8426 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2^n smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles the one in the longest edge bisection algorithm, there are several notable differences as well.
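
The longest edge bisection step described above is easy to sketch; the largest angle variant studied in the paper differs in choosing the vertex with the largest angle and bisecting that angle instead. The code below implements the classical longest-edge version as a baseline:

```python
import numpy as np

def bisect_longest_edge(tri):
    """Split a triangle (3x2 array of vertices) across the midpoint of
    its longest edge, as in the longest edge bisection algorithm."""
    a, b, c = tri
    edges = [(a, b, c), (b, c, a), (c, a, b)]   # (edge endpoints, apex)
    p, q, apex = max(edges, key=lambda e: np.linalg.norm(e[1] - e[0]))
    m = (p + q) / 2                              # midpoint of longest edge
    return [np.array([p, m, apex]), np.array([m, q, apex])]

def triangulate(tri, steps):
    tris = [np.asarray(tri, float)]
    for _ in range(steps):                       # each step doubles the count
        tris = [t2 for t in tris for t2 in bisect_longest_edge(t)]
    return tris

tris = triangulate([(0, 0), (1, 0), (0, 1)], steps=4)
print(len(tris))   # 2**4 = 16 triangles
```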

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 381
8425 Traditional Drawing, BIM and Erudite Design Process

Authors: Maryam Kalkatechi

Abstract:

Nowadays, parametric design, scientific analysis, and digital fabrication are dominant, and many architectural practices increasingly seek to incorporate advanced digital software and fabrication in their projects. An erudite design process that combines digital and practical aspects in a strong frame resulted from the dissertation research. The digital aspects are the progressive advancements in algorithmic design and simulation software; these have helped firms develop more holistic concepts at an early stage and maintain collaboration among disciplines during the design process. The erudite design process enhances current design processes by encouraging the designer to implement construction and architecture knowledge within the algorithm to produce successful design processes. The erudite design process also involves ongoing improvements in applying the new method of 3D printing in construction. This is achieved through 'data-sketches', a term developed by the author in the recently completed dissertation; a data-sketch accommodates the architect's decisions within the algorithm. This paper introduces the erudite design process and its components and summarizes the application of this process in the development of the '3D printed construction unit'. This paper contributes to bridging academia and practice with advanced technology by presenting a design process that transfers the dominance of the tool to the learned architect and encourages innovation in design processes.

Keywords: erudite, data-sketch, algorithm design in architecture, design process

Procedia PDF Downloads 262
8424 Facial Biometric Privacy Using Visual Cryptography: A Fundamental Approach to Enhance the Security of Facial Biometric Data

Authors: Devika Tanna

Abstract:

'Biometrics' means 'life measurement', but the term is usually associated with the use of unique physiological characteristics to identify an individual. It is important to secure the privacy of digital face images stored in a central database. To impart privacy to such biometric face images, the digital face image is first split into two host face images such that each of them gives no idea of the existence of the original face image; each cover image is then stored in one of two geographically separate databases. Only when both cover images are simultaneously available can the original image be accessed. This is achieved using the XM2VTS and IMM face databases and an adaptive algorithm for spatial greyscale. The algorithm helps to select the appropriate host images, which are most likely to be compatible with the secret image stored in the central database, based on its geometry and appearance. The encryption is done using GEVCS, which results in a reconstructed image identical to the original private image.
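
GEVCS additionally embeds the shares into natural-looking host faces; as a much-simplified illustration of the underlying share-and-recover principle only, a (2, 2) XOR split looks like this:

```python
import numpy as np

def split_image(secret):
    """Naive (2, 2) XOR sharing: neither share alone reveals the image.
    This sketch shows only the share-and-recover principle, not GEVCS."""
    rng = np.random.default_rng()
    share1 = rng.integers(0, 256, size=secret.shape, dtype=np.uint8)
    share2 = np.bitwise_xor(secret, share1)      # store in 2nd database
    return share1, share2

def recover(share1, share2):
    return np.bitwise_xor(share1, share2)

face = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
s1, s2 = split_image(face)
assert np.array_equal(recover(s1, s2), face)
```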

Keywords: adaptive algorithm, database, host images, privacy, visual cryptography

Procedia PDF Downloads 113
8423 Optimizing Water Consumption of a Washer-Dryer Which Contains Water Condensation Technology under a Constraint of Energy Consumption and Drying Performance

Authors: Aysegul Sarac

Abstract:

Washer-dryers are machines that can either wash laundry or dry it; in other words, a washer-dryer is a washing machine and a dryer in one machine. Washing machines are characterized by loading capacity, cabinet depth, and spin speed; dryers are characterized by their drying technology. Energy efficiency, water consumption, and noise level are the main characteristics that influence customers' decisions to buy washers. Water condensation technology is the most common drying technology in the washer-dryer market, and it uses water to dry the laundry inside the machine; thus, in this type of drying technology, water consumption is high compared with other technologies. Water condensation technology sprays cold water into the drum to condense the humidity of the hot air and thereby dry the laundry inside, so water consumption influences drying performance. The scope of this study is to optimize water consumption during the drying process under constraints on energy consumption and drying performance. We use the Six Sigma methodology to find the optimum water consumption by comparing the drying performance of different drying algorithms.

Keywords: optimization, 6-Sigma methodology, washer-dryers, water condensation technology

Procedia PDF Downloads 348
8422 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting

Authors: Analise Borg, Paul Micallef

Abstract:

Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications capable of identifying a piece of music in a short time have emerged on the market. Different audio effects and degradations make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system that makes use of a non-parametric algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that the non-parametric analysis offers promising results, comparable to those mentioned in the literature.
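
The parametric branch (MFCC features modelled by a GMM) can be sketched with librosa and scikit-learn; the sine-wave signal and model sizes below are placeholders, not the paper's configuration:

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

# Placeholder signal; a real system would load a track with librosa.load.
sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr * 2) / sr).astype(np.float32)

# Frame-level MFCCs (rows = frames after transpose).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# Parametric model of the track: a GMM over its MFCC frames.
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(mfcc)

# Score an unknown clip against the model (higher = better match).
print(gmm.score(mfcc))
```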

Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7

Procedia PDF Downloads 403
8421 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is produced for the following day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean squared error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified with real weather and air pollution data taken from the Airly sensor network. The dense, distributed network of Airly measurement devices provides access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of a CNN, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers; in the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector containing 24 elements: the predicted PM10 concentrations for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; several that give the best results were selected, and a comparison was made with other models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow the prediction of PM10 dust concentration with a much smaller mean squared error than the currently used methods based on linear regression. What is more, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
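
A compact version of the described architecture (24 hourly steps in, a 24-element vector out) can be written in Keras; the channel count, filter sizes, and synthetic tensors are assumptions for illustration:

```python
import numpy as np
from tensorflow import keras

# Placeholder tensors: 24 hourly steps x 6 channels (PM2.5, PM10,
# temperature, wind, and external temperature/wind forecasts).
rng = np.random.default_rng(0)
X = rng.random((500, 24, 6)).astype("float32")
y = rng.random((500, 24)).astype("float32")   # next-day hourly PM10

model = keras.Sequential([
    keras.Input(shape=(24, 6)),
    keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(24),                   # 24 hourly predictions
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).shape)  # (1, 24)
```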

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 130
8420 Achieving Shear Wave Elastography by a Three-element Probe for Wearable Human-machine Interface

Authors: Jipeng Yan, Xingchen Yang, Xiaowei Zhou, Mengxing Tang, Honghai Liu

Abstract:

The shear elastic modulus of skeletal muscles can be obtained by shear wave elastography (SWE) and has been shown to be linearly related to muscle force. However, SWE is currently implemented using array probes; the price and volume of these probes and their driving equipment prevent SWE from being used in wearable human-machine interfaces (HMIs). Moreover, the beamforming required for array probes reduces real-time performance. To achieve SWE with wearable HMIs, a customized three-element probe is adopted in this work, with one element for acoustic radiation force generation and the other two for shear wave tracking. In-phase/quadrature demodulation and 2D autocorrelation are adopted to estimate the velocities of tissue along the sound beams of the latter two elements. Shear wave speed is calculated from the phase shift between the tissue velocities. Three agar phantoms with different elasticities were made by changing the weight of agar. The shear elastic moduli of the phantoms were measured as 8.98, 23.06, and 36.74 kPa at a depth of 7.5 mm, respectively. This work verifies the feasibility of measuring shear elastic modulus with wearable devices.
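
With only two tracking beams, the shear wave speed follows from the arrival-time delay between the two tissue-velocity traces. A minimal sketch (the beam spacing, pulse-repetition frequency, and Gaussian pulses are illustrative; the modulus conversion assumes μ = ρc² with ρ ≈ 1000 kg/m³):

```python
import numpy as np

def shear_wave_speed(v1, v2, dx, fs):
    """Speed from the arrival-time delay between tissue-velocity traces
    at two tracking beams dx metres apart, sampled at fs Hz; the delay
    is located at the cross-correlation peak."""
    lags = np.arange(-len(v1) + 1, len(v1))
    xcorr = np.correlate(v2, v1, mode="full")
    dt = lags[np.argmax(xcorr)] / fs
    return dx / dt

# Toy traces: the same pulse arriving 0.5 ms later on the second beam.
fs, dx = 10_000, 0.005                     # 10 kHz PRF, 5 mm spacing
t = np.arange(0, 0.02, 1 / fs)

def pulse(t0):
    return np.exp(-((t - t0) / 0.001) ** 2)

c = shear_wave_speed(pulse(0.005), pulse(0.0055), dx, fs)
mu = 1000 * c ** 2                         # shear modulus, rho ~ 1000 kg/m3
print(f"speed {c:.1f} m/s, shear modulus {mu/1000:.1f} kPa")
```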

Keywords: shear elastic modulus, skeletal muscle, ultrasound, wearable human-machine interface

Procedia PDF Downloads 136
8419 Visual Speech Perception of Arabic Emphatics

Authors: Maha Saliba Foster

Abstract:

Speech perception has been recognized as a bi-sensory process involving the auditory and visual channels. Compared to the auditory modality, the contribution of the visual signal to speech perception is not very well understood. Studying how the visual modality affects speech recognition can have pedagogical implications for second language learning, as well as clinical applications in speech therapy. The current investigation explores the potential effect of visual speech cues on the perception of Arabic emphatics (AEs). The corpus consists of 36 minimal pairs, each containing two contrasting consonants: an AE versus a non-emphatic (NE). Videos of four Lebanese speakers were edited to give perceivers a partial view of facial regions: lips only, lips-cheeks, lips-chin, lips-cheeks-chin, and lips-cheeks-chin-neck. In the absence of any auditory information, relying solely on visual speech, perceivers were above chance at correctly identifying AEs or NEs across vowel contexts; moreover, the models were able to predict the probability of perceivers' accuracy in identifying some of the COIs produced by certain speakers, and the results showed an overlap between the measurements selected by the computer and those selected by human perceivers. The lack of a significant face effect on the perception of AEs seems to point to the lips, present in all of the videos, as the most important and often sufficient facial feature for emphasis recognition. Future investigations will aim at refining the analyses of the visual cues used by perceivers by applying Principal Component Analysis and including the time evolution of facial feature measurements.

Keywords: Arabic emphatics, machine learning, speech perception, visual speech perception

Procedia PDF Downloads 294
8418 The Factors Affecting the Use of Massive Open Online Courses in Blended Learning by Lecturers in Universities

Authors: Taghreed Alghamdi, Wendy Hall, David Millard

Abstract:

Massive Open Online Courses (MOOCs) have recently gained widespread interest in the academic world, prompting discussion of a number of issues. One of these issues is the use of MOOCs in teaching and learning in higher education by integrating MOOC content with traditional face-to-face activities in a blended format, called blended MOOCs (bMOOCs), which is intended not to replace traditional learning but to enhance student learning. Most research on MOOCs has focused on students' perceptions and institutional threats, whereas there is a lack of published research on academics' experiences and practices. Thus, the first aim of the study is to develop a classification of blended MOOC models by conducting a systematic literature review, classifying 19 different case studies, and identifying the broad types of bMOOC models, namely the supplementary model and the integrated model. The analysis phase will therefore emphasize these different types of bMOOC models in terms of the adoption of MOOCs by lecturers. The second aim of the study is to improve the understanding of lecturers' acceptance of bMOOCs by investigating the factors that influence academics' acceptance of using MOOCs in traditional learning, through an online survey distributed to lecturers who participate in MOOC platforms. These factors can help institutions encourage their lecturers to integrate MOOCs with their traditional courses in universities.

Keywords: acceptance, blended learning, blended MOOCs, higher education, lecturers, MOOCs, professors

Procedia PDF Downloads 123