Search results for: Computer aided sport training
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2334

1914 Needs Analysis Survey of Hearing Impaired Students’ Teachers in Elementary Schools for Designing Curriculum Plans and Improving Human Resources

Authors: F. Rashno Seydari, M. Nikafrooz

Abstract:

This paper investigates the training needs of teachers of hearing-impaired students in elementary schools across Iran. The subjects were 275 teachers of hearing-impaired students in elementary schools, selected by quota sampling. Data were collected with a training-needs questionnaire consisting of 41 knowledge items and 31 performance items and were analyzed in SPSS using descriptive statistics (frequency and mean) and inferential statistics (one-sample t-test, paired t-test, independent t-test, and Pearson correlation coefficient). The findings indicated that the teachers have considerable needs in both the knowledge and performance domains: in 32 of the 41 knowledge items and in 27 of the 31 performance items, the teachers reported considerable needs. Quantitatively, the needs in the performance domain exceeded those in the knowledge domain, so they should be treated as the first priority when training these teachers. There was no difference between the needs of male and female teachers. There was a significant correlation between teaching experience and the needs in the knowledge and performance domains (0.354 and 0.322, respectively). Teachers who had already been trained in working with hearing-impaired students expressed greater training needs in both domains.

Keywords: Needs analysis, hearing impaired students, hearing impaired students’ teachers, knowledge domain, performance domain.

1913 A Social Cognitive Investigation in the Context of Vocational Training Performance of People with Disabilities

Authors: Majid A. AlSayari

Abstract:

The study reported here investigated social cognitive theory (SCT) in the context of Vocational Rehab (VR) for people with disabilities. The primary purpose was to increase knowledge of VR phenomena and to make recommendations for improving VR services. The sample consisted of 242 persons with Spinal Cord Injuries (SCI) who completed questionnaires; a further 32 participants were trainers. The questionnaire data were analyzed using factor analysis, multiple regression analysis, and thematic analysis. The analysis suggested that, in motivational terms, and consistent with research carried out in other academic contexts, self-efficacy was the best predictor of VR performance. The author concludes that VR self-efficacy predicted VR training performance.

Keywords: Social cognitive theory, vocational rehab, self-efficacy, proxy efficacy, people with disabilities.

1912 Hybrid Approach for Memory Analysis in Windows System

Authors: Khairul Akram Zainol Ariffin, Ahmad Kamil Mahmood, Jafreezal Jaafar, Solahuddin Shamsuddin

Abstract:

Random Access Memory (RAM) is an important device in a computer system: it provides a snapshot of how the computer has been used. With its growing importance, computer memory has become a recurring topic in digital forensics, and a number of tools have been developed to retrieve information from it. However, most of these tools are limited in their ability to retrieve the important information from computer memory. This paper therefore discusses the limitations and setbacks of two main techniques, process signature search and process enumeration, and then presents a new hybrid approach that minimizes the setbacks of each individual technique. The new approach combines both techniques in order to retrieve information from the process block and other objects in computer memory. The basic theory of address translation on x86 platforms is also demonstrated in this paper.
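
As an illustration of the signature-search half of such an approach, the sketch below scans a raw memory image for a fixed byte pattern and records candidate offsets for further parsing; the signature, the dump file name and the chunked scanning scheme are illustrative assumptions, not the authors' tool.

```python
# Minimal sketch of the process-signature-search idea (not the authors' tool):
# scan a raw memory image for a byte pattern that may mark candidate process
# objects, then record the offsets for further parsing.
SIGNATURE = b"\x03\x00\x1b\x00"  # hypothetical pool-header pattern

def find_candidates(dump_path, signature=SIGNATURE, chunk_size=1 << 20):
    """Yield byte offsets in the memory image where the signature occurs."""
    offsets = []
    overlap = len(signature) - 1
    pos = 0
    prev_tail = b""
    with open(dump_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            data = prev_tail + chunk          # keep overlap so matches across chunks are found
            start = 0
            while True:
                idx = data.find(signature, start)
                if idx == -1:
                    break
                offsets.append(pos - len(prev_tail) + idx)
                start = idx + 1
            prev_tail = data[-overlap:] if overlap else b""
            pos += len(chunk)
    return offsets

# Example usage (with a hypothetical dump file):
# print(find_candidates("memory.dmp")[:10])
```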

Keywords: Algorithms, Digital Forensics, Memory Analysis, Signature Search.

1911 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines

Authors: Mona Soliman Habib

Abstract:

This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is biomedical publications, selected for its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun-phrase tagging, thereby allowing applicability across languages; no domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable machine learning solution for real-world problems with large datasets. An initial prototype greatly improves training time at the expense of memory requirements.
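
As a rough illustration of the kind of setup described (high-dimensional orthographic features fed to a multi-class SVM, with no POS tagging or domain knowledge), the following sketch uses scikit-learn on a toy token sequence; the tokens, labels and feature template are assumptions, not the paper's biomedical corpus or feature set.

```python
# A minimal, generic sketch of multi-class SVM token classification for NER.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

tokens = ["IL-2", "activates", "NF-kappaB", "in", "T", "cells"]
labels = ["B-protein", "O", "B-protein", "O", "B-cell_type", "I-cell_type"]

def token_features(tokens, i):
    """Simple orthographic features; no POS tags or domain knowledge."""
    w = tokens[i]
    return {
        "word": w.lower(),
        "has_digit": any(c.isdigit() for c in w),
        "has_hyphen": "-" in w,
        "is_upper": w.isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

X = [token_features(tokens, i) for i in range(len(tokens))]
model = make_pipeline(DictVectorizer(), LinearSVC())  # one-vs-rest multi-class SVM
model.fit(X, labels)
print(model.predict([token_features(tokens, 0)]))
```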

Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.

1910 Bayesian Deep Learning Algorithms for Classifying COVID-19 Images

Authors: I. Oloyede

Abstract:

The study investigates the accuracy and loss of deep learning algorithms on a coronavirus (COVID-19) image dataset by comparing a Bayesian convolutional neural network with a traditional convolutional neural network on a low-dimensional dataset. The dataset contained 50 X-ray images, of which 25 were COVID-19 and the remainder were normal; twenty images were used for training and five for validation to ascertain the accuracy of the model. The study found that the Bayesian convolutional neural network outperformed the conventional convolutional neural network on the low-dimensional dataset, which could otherwise have exhibited underfitting. The study therefore recommends the Bayesian convolutional neural network (BCNN) for Android computer-vision applications for image detection.
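
For orientation, the sketch below contrasts a plain CNN with a Monte Carlo dropout approximation of a Bayesian CNN on tiny synthetic image data; the architecture, image size, MC-dropout formulation and random data are assumptions and may differ from the BCNN used in the study.

```python
# Illustrative contrast: plain CNN vs. an MC-dropout approximation of a Bayesian CNN.
import numpy as np
import tensorflow as tf

def make_cnn(bayesian=False):
    inputs = tf.keras.Input(shape=(64, 64, 1))
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    # training=True keeps dropout active at prediction time (MC dropout).
    x = tf.keras.layers.Dropout(0.5)(x, training=bayesian)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Tiny synthetic stand-in for the X-ray images (50 samples, binary labels).
x = np.random.rand(50, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(50, 1))

bcnn = make_cnn(bayesian=True)
bcnn.fit(x[:45], y[:45], epochs=2, verbose=0)
# Predictive mean over repeated stochastic forward passes.
probs = np.mean([bcnn.predict(x[45:], verbose=0) for _ in range(10)], axis=0)
print(probs.ravel())
```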

Keywords: BCNN, CNN, Images, COVID-19, Deep Learning.

1909 Computer Aided Docking Studies on Antiviral Drugs for SARS

Authors: Virupakshaiah DBM, Chandrakanth Kelmani, Rachanagouda Patil, Prasad Hegade

Abstract:

Severe acute respiratory syndrome (SARS) is a respiratory disease in humans caused by the SARS coronavirus. The treatment of coronavirus-associated SARS is still evolving, and so far there is no consensus on an optimal regimen. The mainstream therapeutic interventions for SARS involve broad-spectrum antibiotics and supportive care, as well as antiviral agents and immunomodulatory therapy. The protein-ligand interaction plays a significant role in structure-based drug design. In the present work we took the receptor angiotensin converting enzyme 2 (ACE-2) and identified the drugs commonly used against SARS: Lopinavir, Ritonavir, Ribavirin, and Oseltamivir. The receptor ACE-2 was docked with the above drugs, and the energy values obtained were: Lopinavir (-292.3), Ritonavir (-325.6), Oseltamivir (-229.1), and Ribavirin (-208.8). Based on the lowest energy values, the best two of the four conventional drugs were chosen. We then tried to improve the binding efficiency and steric compatibility of these two drugs, Ritonavir and Lopinavir. Several modifications were made to the probable functional groups interacting with the receptor molecule (phenylic and ketonic groups in the case of Ritonavir, and carboxylic groups in the case of Lopinavir). Analogs were prepared in the Marvin Sketch software and docked using the HEX docking software. Lopinavir analog 8 and Ritonavir analog 11 were detected with significant energy values and are probable lead molecules, which indicates that some of the modified drugs are better than the original drugs. Further work can be carried out to improve the steric compatibility of the drugs, building on the work above, for more energy-efficient binding of the drugs to the receptor.

Keywords: Protein data bank, Rasmol, Marvin sketch, Hexdocking.

1908 Analysis of the Impact of NVivo and EndNote on Academic Research Productivity

Authors: Sujit K. Basak

Abstract:

The aim of this paper is to analyze the impact of literature review software on researchers. This aim was achieved by analyzing models in terms of perceived usefulness, perceived ease of use, and acceptance level. Collected data were analyzed using the WarpPLS 4.0 software. The study used two theoretical frameworks, namely the Technology Acceptance Model and the Training Needs Assessment Model. The study was experimental and was conducted at a public university in South Africa. The results showed that acceptance level has the greatest impact on research productivity, followed by perceived usefulness and perceived ease of use.

Keywords: Technology acceptance model, training needs assessment model, literature review software, research productivity.

1907 Determination of Electromagnetic Properties of Human Tissues

Authors: Iliana Marinova, Valentin Mateev

Abstract:

In this paper a computer system for measuring electromagnetic properties is designed. The system employs an Agilent 4294A precision impedance analyzer to measure the amplitude and phase of a signal applied over a tested biological tissue sample. Data measured by the developed computer system can be used for tissue characterization over a wide frequency range, from 40 Hz to 110 MHz. The computer system can interface with output devices, enabling a flexible testing process.
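
To illustrate how amplitude and phase measurements can be turned into electromagnetic properties, the sketch below converts an assumed amplitude/phase sweep into complex impedance and relative permittivity for a parallel-plate cell; the geometry and measurement values are illustrative, not data from the described Agilent 4294A system.

```python
# Minimal numpy sketch: amplitude + phase -> complex impedance -> relative permittivity.
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity, F/m
area, gap = 1e-4, 1e-3    # hypothetical electrode area (m^2) and spacing (m)

freq = np.logspace(np.log10(40), np.log10(110e6), 5)          # 40 Hz .. 110 MHz
amplitude = np.array([1.2e5, 5.0e4, 8.0e3, 9.0e2, 1.1e2])     # |Z| in ohms (assumed)
phase_deg = np.array([-75.0, -68.0, -55.0, -40.0, -25.0])     # phase in degrees (assumed)

Z = amplitude * np.exp(1j * np.deg2rad(phase_deg))            # complex impedance
Y = 1.0 / Z                                                   # complex admittance
# Complex relative permittivity of the sample between the electrodes.
eps_r = Y * gap / (1j * 2 * np.pi * freq * eps0 * area)

for f, z, e in zip(freq, Z, eps_r):
    print(f"{f:12.1f} Hz  |Z|={abs(z):10.1f} ohm  eps'={e.real:10.1f}  eps''={-e.imag:10.1f}")
```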

Keywords: Electromagnetic properties, human tissue, bioimpedance, measurement system.

1906 Improving RBF Networks Classification Performance by using K-Harmonic Means

Authors: Z. Zainuddin, W. K. Lye

Abstract:

In this paper, a clustering algorithm named K-Harmonic Means (KHM) was employed in the training of Radial Basis Function Networks (RBFNs). KHM organized the data into clusters and determined the centres of the basis functions. The popular clustering algorithms K-Means (KM) and Fuzzy C-Means (FCM) are highly dependent on the initial identification of elements that represent the clusters well; in KHM, this problem is avoided, which leads to improved classification performance compared to the other clustering algorithms. The classification accuracies of KM, FCM and KHM were compared on the benchmark data sets Iris Plant, Diabetes and Breast Cancer. RBFN training with the KHM algorithm shows better accuracy on these classification problems.
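
A compact sketch of the K-Harmonic Means update used to place RBF centres is given below; the data, the value of p and the shared-width heuristic are illustrative assumptions rather than the paper's experimental setup.

```python
# K-Harmonic Means (KHM) centre updates, then Gaussian RBF activations from those centres.
import numpy as np

def khm_centers(X, k, p=3.5, iters=50, eps=1e-8, seed=0):
    """Return k cluster centres found by K-Harmonic Means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, k)
        inv_p = d ** (-p)
        denom = inv_p.sum(axis=1, keepdims=True) ** 2        # (sum_j d_ij^-p)^2
        q = d ** (-p - 2) / denom                            # point-centre weights
        centers = (q.T @ X) / q.sum(axis=0)[:, None]         # weighted means
    return centers

# Toy usage: the centres become the RBF basis-function centres.
X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
centers = khm_centers(X, k=3)
widths = np.full(len(centers), X.std())                      # simple shared-width heuristic
phi = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2 / (2 * widths ** 2))
print(centers.round(2), phi.shape)
```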

Keywords: Neural networks, Radial basis functions, Clustering method, K-harmonic means.

1905 Improved Back Propagation Algorithm to Avoid Local Minima in Multiplicative Neuron Model

Authors: Kavita Burse, Manish Manoria, Vishnu P. S. Kirar

Abstract:

The back propagation algorithm calculates the weight changes of artificial neural networks, and a common approach is to use a training algorithm consisting of a learning rate and a momentum factor. The major drawbacks of this learning algorithm are the problems of local minima and slow convergence. The addition of an extra term, called a proportional factor, reduces the convergence time of the back propagation algorithm. We have applied this three-term back propagation to multiplicative neural network learning. The algorithm is tested on the XOR and parity problems and compared with the standard back propagation training algorithm.
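
A minimal sketch of a three-term weight update (gradient, momentum and a proportional error term) applied to a single multiplicative neuron on XOR is shown below; the exact form of the proportional term, the hyperparameters and the convergence behaviour are assumptions and may need tuning.

```python
# Three-term update sketch: delta_w = -eta*grad + alpha*delta_w_prev + gamma*error.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])                      # XOR targets

w = rng.uniform(-1, 1, 2)
b = rng.uniform(-1, 1, 2)
dw_prev, db_prev = np.zeros(2), np.zeros(2)
eta, alpha, gamma = 0.5, 0.7, 0.01                  # learning rate, momentum, proportional factor

def forward(x):
    terms = w * x + b                               # (w_i * x_i + b_i)
    u = terms.prod()                                # multiplicative aggregation
    return terms, u, 1.0 / (1.0 + np.exp(-u))

for epoch in range(10000):
    grad_w, grad_b, err_sum = np.zeros(2), np.zeros(2), 0.0
    for x, t in zip(X, T):
        terms, u, y = forward(x)
        delta = (y - t) * y * (1 - y)               # dE/du for squared error
        for i in range(2):
            other = terms[1 - i]                    # product of the remaining term
            grad_w[i] += delta * x[i] * other
            grad_b[i] += delta * other
        err_sum += t - y
    # Three-term update: gradient term + momentum term + proportional error term.
    dw = -eta * grad_w + alpha * dw_prev + gamma * err_sum
    db = -eta * grad_b + alpha * db_prev + gamma * err_sum
    w, b = w + dw, b + db
    dw_prev, db_prev = dw, db

print([round(float(forward(x)[2]), 2) for x in X])
```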

Keywords: Three term back propagation, multiplicative neural network, proportional factor, local minima.

1904 Memorabilia of Suan Sunandha through Interactive User Interface

Authors: Nalinee Sophatsathit

Abstract:

The objectives of the memorabilia of Suan Sunandha are to develop a general-knowledge presentation about the historical royal garden through an interactive graphic simulation technique and to employ high-functionality context to enhance interactive user navigation. The approach infers non-intrusive display of relevant history in response to situational context. The user navigates through a virtual-reality campus consisting of new and restored buildings, and a flashback presentation of historical information in the form of photos, paintings, and textual descriptions is displayed at each building passed. To keep the presentation lively, the graphical simulation is framed as serendipitous game play so that the user can both learn and enjoy the educational tour. The benefits of this human-computer interaction development are twofold. First, a lively presentation technique and situational context modeling are developed that entail a usable paradigm of combined knowledge and information presentation. Second, cost-effective training and promotion for both internal personnel and public visitors to learn about and keep informed of this historical royal garden can be furnished without the need for a dedicated public relations service. Future improvements in graphic simulation and ability-based display can make this work more realistic, user-friendly, and informative for all.

Keywords: Interactive user navigation, high-functionality context, situational context, human-computer interaction.

1903 A Post Processing Method for Quantum Prime Factorization Algorithm based on Randomized Approach

Authors: Mir Shahriar Emami, Mohammad Reza Meybodi

Abstract:

Prime factorization based on a quantum approach is performed in two phases: the first phase is carried out on a quantum computer, and the second phase (post-processing) on a classical computer. In the second phase the goal is to estimate the period r of the equation x^r ≡ 1 (mod N) and to find the prime factors of the composite integer N on the classical computer. In this paper we present a method based on a randomized approach for estimating the period r with a satisfactory probability, so that the composite integer N can be factorized; with the randomized approach, even when the estimate of the period is not exactly the real period, we can at least find one of the prime factors of the composite N. Finally, we present some important points for designing an emulator for quantum computer simulation.
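
The classical post-processing step can be illustrated as follows: given a candidate period r for x^r ≡ 1 (mod N), non-trivial factors are recovered from gcd(x^(r/2) ± 1, N). The brute-force period routine and the randomized retries below are stand-ins for the quantum phase-estimation step and the paper's randomized estimator; N and the retry budget are illustrative.

```python
# Classical post-processing sketch for period-based factoring.
import math
import random

def factor_from_period(N, x, r):
    """Return a non-trivial factor of N from x and an (estimated) period r, or None."""
    if r % 2 != 0:
        return None
    y = pow(x, r // 2, N)
    if y == N - 1:                      # x^(r/2) == -1 (mod N): unlucky case
        return None
    for candidate in (math.gcd(y - 1, N), math.gcd(y + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None

def classical_period(x, N):
    """Brute-force stand-in for the quantum phase-estimation step."""
    r, acc = 1, x % N
    while acc != 1:
        acc = (acc * x) % N
        r += 1
    return r

N = 15
for _ in range(10):                     # randomized retries over bases x
    x = random.randrange(2, N)
    if math.gcd(x, N) > 1:
        print("lucky gcd factor:", math.gcd(x, N)); break
    f = factor_from_period(N, x, classical_period(x, N))
    if f:
        print("factor:", f, "and", N // f); break
```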

Keywords: Quantum Prime Factorization, Randomized Algorithms, Quantum Computer Simulation, Quantum Computation.

1902 An Interactive e-Learning Management System (e-LMS): A Solution to Tanzanian Secondary Schools' Education

Authors: A. Ellen Kalinga, R. B. Burchard Bagile, Lena Trojer

Abstract:

Information and Communications Technologies (ICT) have been integrated into education in many developing and developed countries alike, but the use of ICT in Tanzanian schools is dismal. Many Tanzanian secondary schools have no computers, and the few schools with computers use them primarily for secretarial services and computer literacy training. The Tanzanian education system, including the secondary school level, has to undergo substantial transformation, underscored by the growing application of new information and communication technology. This paper presents the results of an e-readiness survey of secondary schools in Tanzania. The paper also suggests how Tanzania can make use of the few ICT resources currently available to support and improve teaching and learning, and thereby improve performance and knowledge acquisition, by using an e-Learning Management System (e-LMS).

Keywords: e-Learning, ICT, Object-Oriented, Participatory design.

1901 In Search of a Suitable Neural Network Capable of Fast Monitoring of Congestion Level in Electric Power Systems

Authors: Pradyumna Kumar Sahoo, Prasanta Kumar Satpathy

Abstract:

This paper aims at finding a suitable neural network for monitoring the congestion level in electrical power systems. The input data are framed to meet the target objective through a supervised learning mechanism by defining normal and abnormal operating conditions for the system under study. The congestion level, expressed as a line congestion index (LCI), is evaluated for each operating condition and is presented to the neural network (NN) along with the bus voltages as the input and target data. Once training is successful, the NN can deal with newly presented data through the validation and testing mechanism. The crux of the results presented in this paper rests on a performance comparison of a multi-layered feedforward neural network trained with eleven back propagation techniques, so as to identify the best training criteria. The proposed methodology has been tested on the standard IEEE 14-bus test system with the support of the MATLAB NN toolbox. The results show that the Levenberg-Marquardt backpropagation algorithm gives the best training performance of the eleven cases considered, thus validating the proposed methodology.
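
As a rough sketch of the supervised set-up (bus voltages in, congestion level out), the snippet below defines a simple LCI as average line loading and fits a small network to synthetic data; the LCI definition, the synthetic power-flow mapping and the use of scikit-learn's lbfgs solver in place of Levenberg-Marquardt are all assumptions.

```python
# Toy supervised set-up: bus voltages -> line congestion index (LCI).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def line_congestion_index(line_flows, line_limits):
    """One simple LCI: average line loading relative to the line limits."""
    return float(np.mean(np.abs(line_flows) / line_limits))

# Synthetic operating conditions: 14 bus voltages -> line flows -> LCI target.
n_cases, n_buses, n_lines = 200, 14, 20
voltages = 1.0 + 0.05 * rng.standard_normal((n_cases, n_buses))   # per-unit bus voltages
A = rng.uniform(-5, 5, (n_buses, n_lines))                        # crude stand-in for a power-flow mapping
flows = voltages @ A
limits = np.full(n_lines, 5.0)
lci = np.array([line_congestion_index(f, limits) for f in flows])

model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000, random_state=0)
model.fit(voltages[:150], lci[:150])
print("held-out R^2:", round(model.score(voltages[150:], lci[150:]), 3))
```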

Keywords: Line congestion index, critical bus, contingency, neural network.

1900 Reasoning with Dynamic Domains and Computer Security

Authors: Yun Bai

Abstract:

Representing objects in a dynamic domain is essential in commonsense reasoning under some circumstances. Classical logics and their nonmonotonic consequences, however, are usually not able to deal with reasoning with dynamic domains due to the fact that every constant in the logical language denotes some existing object in the static domain. In this paper, we explore a logical formalization which allows us to represent nonexisting objects in commonsense reasoning. A formal system named N-theory is proposed for this purpose and its possible application in computer security is briefly discussed.

Keywords: Knowledge representation and reasoning, commonsense reasoning, computer security.

1899 Computer Verification in Cryptography

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function used in cryptography; more precisely, we describe formal properties of this implementation that we computer-prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities), given in the formal language of the formal proof system Isabelle/HOL, and we computer-prove Bayes' formula. We also describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives and describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
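
For reference, a short, unverified Python version of the Miller-Rabin test is sketched below; it illustrates the algorithm that is formally verified in the paper, not the Isabelle/HOL implementation itself.

```python
# Standard probabilistic Miller-Rabin primality test (not formally verified).
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test; 'composite' answers are always correct."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness that n is composite
    return True                          # probably prime

print(miller_rabin(7919), miller_rabin(7921))   # 7919 is prime, 7921 = 89 * 89
```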

Keywords: Prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' formula, Miller-Rabin primality test.

1898 Low Resolution Single Neural Network Based Face Recognition

Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum

Abstract:

This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts the original images into blurred images using an average filter and equalizes their histograms (lighting normalization). A bi-cubic interpolation function is applied to each equalized image to obtain a resized image. The resized image is a low-resolution image, which allows faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images. The empirical results provide accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 sec per image.
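
The preprocessing chain (average filter, histogram equalization, bicubic resize) can be sketched with OpenCV as follows; the kernel size, target resolution and synthetic test image are assumptions rather than the paper's exact settings.

```python
# Preprocessing sketch: average filter -> histogram equalization -> bicubic downscale.
import cv2
import numpy as np

def preprocess(gray, target=(23, 28)):
    """Return a low-resolution, lighting-normalized face image as a flat vector."""
    blurred = cv2.blur(gray, (3, 3))                        # average filter
    equalized = cv2.equalizeHist(blurred)                   # lighting normalization
    small = cv2.resize(equalized, target, interpolation=cv2.INTER_CUBIC)
    return small.astype(np.float32).ravel() / 255.0         # input vector for the NN

# Synthetic stand-in for an ORL face image (92x112 grayscale).
face = np.random.randint(0, 256, (112, 92), dtype=np.uint8)
vec = preprocess(face)
print(vec.shape)   # (23*28,) = (644,)
```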

Keywords: Average filtering, Bicubic Interpolation, Neurons, vectorization.

1897 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal, and the arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set: pairs of speakers are selected randomly from a single district, and each speaker has 10 sentences, two used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, and the algorithm produces ~93% accuracy at 0 dB SNR.
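
A compact sketch of the matching-pursuit step, which greedily decomposes a signal over a normalized dictionary into a sparse weight vector, is given below; the random dictionary and toy signal stand in for the learned Gabor-like atoms and speech envelopes.

```python
# Greedy matching pursuit over a unit-norm dictionary -> sparse "weight space" vector.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """dictionary: (n_features, n_dict) with unit-norm columns."""
    residual = signal.copy()
    weights = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))      # best-matching atom
        weights[k] += correlations[k]
        residual = residual - correlations[k] * dictionary[:, k]
    return weights, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
true_idx = rng.choice(512, 5, replace=False)
x = D[:, true_idx] @ rng.uniform(1, 3, 5)             # sparse synthetic signal

w, res = matching_pursuit(x, D, n_atoms=20)
print("non-zero weights:", np.count_nonzero(w),
      "residual norm:", round(float(np.linalg.norm(res)), 4))
```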

Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.

1896 On Dialogue Systems Based on Deep Learning

Authors: Yifan Fan, Xudong Luo, Pingping Lin

Abstract:

Nowadays, dialogue systems are increasingly becoming the way humans access many computer systems, allowing humans to interact with computers in natural language. A dialogue system consists of three parts: understanding what humans say in natural language, managing the dialogue, and generating responses in natural language. In this paper, we survey deep learning based methods for dialogue management, response generation and dialogue evaluation. Specifically, these methods are based on neural networks, long short-term memory networks, deep reinforcement learning, pre-training and generative adversarial networks. We compare these methods and point out further research directions.

Keywords: Dialogue management, response generation, reinforcement learning, deep learning, evaluation.

1895 Application of Digital Tools for Improving Learning

Authors: José L. Jiménez

Abstract:

The use of technology in the classroom is an issue that is constantly evolving. Digital-age students learn differently than their teachers did, so teachers should constantly evolve their methods and teaching techniques to stay in touch with the student. This paper presents a case study of how some of these technologies were used to accompany a classroom course, in order to provide students with a different and innovative experience compared to how their teacher usually presented the activities to be developed. As students worked on the various activities, they increased their digital skills by employing unfamiliar tools that helped them in their professional training. The twenty-first-century teacher should consider the use of Information and Communication Technologies in the classroom with the skills that digital-age students should possess in mind. The paper also takes a brief look at the history of distance education and highlights the importance of integrating technology as part of the student's training.

Keywords: Digital tools, on-line learning, social networks, technology.

1894 Factors that Contribute to the Improvement of the Sense of Self-Efficacy of Special Educators in Inclusive Settings in Greece

Authors: Sotiria Tzivinikou, Dimitra Kagkara

Abstract:

A teacher's sense of self-efficacy can significantly affect both the teacher's and the students' performance. More specifically, self-efficacy is associated with learning outcomes as well as with students' motivation and self-efficacy; for example, teachers with a high sense of self-efficacy are more open to innovations and invest more effort in teaching. In addition, effective inclusive education is associated with higher levels of teacher self-efficacy. Pre-service teachers with high levels of self-efficacy can handle students' behavior better and more effectively assist students with special educational needs. Teacher preparation programs are also important, because teachers' efficacy beliefs are shaped early in learning; as a result, the quality of teacher education programs can affect the sense of self-efficacy of pre-service teachers. Usually, a number of pre-service teachers do not consider themselves well prepared to work with students with special educational needs and do not have the appropriate sense of self-efficacy. This study aims to investigate the factors that contribute to the improvement of the sense of self-efficacy of pre-service special educators through an academic practicum training program. The sample of this study is 159 pre-service special educators who participated in the academic practicum training program. Quantitative methods were used for data collection and analysis. The teachers' self-efficacy was self-assessed through a questionnaire based on the Teachers' Sense of Efficacy Scale, with pre- and post-measurements taken. The results of the survey are consistent with the international literature: they indicate that a significant number of pre-service special educators do not hold the appropriate sense of self-efficacy regarding teaching students with special educational needs, and that a quality academic training program is a crucial factor for improving the sense of self-efficacy of pre-service special educators, as well as for the provision of high-quality inclusive education.

Keywords: Inclusive education, pre-service, self-efficacy, training program.

1893 Integrating Visual Modeling throughout the Computer Science Curriculum

Authors: Carol B. Collins, M. H. N. Tabrizi

Abstract:

The purposes of this paper are to (1) promote excellence in computer science by suggesting a cohesive, innovative approach to fill well-documented deficiencies in current computer science education, (2) justify (using the authors' and others' anecdotal evidence from both the classroom and the real world) why this approach holds great potential to successfully eliminate the deficiencies, and (3) invite other professionals to join the authors in proof-of-concept research. The authors' experiences, though anecdotal, strongly suggest that a new approach involving visual modeling technologies should allow computer science programs to retain a greater percentage of prospective and declared majors, as students become more engaged learners, more successful problem-solvers, and better-prepared programmers. In addition, the graduates of such computer science programs will make greater contributions to the profession as skilled problem-solvers. Instead of wearily re-memorizing code as they move to the next course, students will have the problem-solving skills to think and work in more sophisticated and creative ways.

Keywords: Algorithms, CASE, Problem-solving, UML.

1892 The Use of Simulation Programs of Leakage of Harmful Substances for Crisis Management

Authors: Jiří Barta

Abstract:

The paper deals with simulation programs for the spread of harmful substances. Air pollution has a direct impact on the quality of human life, and environmental protection is currently a very hot topic; therefore, the paper focuses on the simulation of releases of harmful substances. The first part of the article deals with the perspectives and possibilities of integrating the outputs of simulation programs into the education and practical training of management staff for emergency events within the framework of critical infrastructure. The last part presents the practical testing and evaluation of simulation programs. Of the tested simulation software, Symos97 was selected; the tool offers advanced features for configuring a leak and allows the user to progressively model the terrain, the location, and the manner in which harmful substances escape.

Keywords: Computer Simulation, Symos97, spread, simulation software, harmful substances.

1891 Combining ILP with Semi-supervised Learning for Web Page Categorization

Authors: Nuanwan Soonthornphisaj, Boonserm Kijsirikul

Abstract:

This paper presents a semi-supervised learning algorithm called Iterative Cross-Training (ICT) to solve Web page classification problems. We apply Inductive Logic Programming (ILP) as a strong learner in ICT. The objective of this research is to evaluate the potential of the strong learner to boost the performance of the weak learner in ICT. We compare the result with supervised Naive Bayes, a well-known algorithm for text classification. The performance of our learning algorithm is also compared with other semi-supervised learning algorithms, namely Co-Training and EM. The experimental results show that the ICT algorithm outperforms those algorithms and that the performance of the weak learner can be enhanced by the ILP system.

Keywords: Inductive Logic Programming, Semi-supervised Learning, Web Page Categorization.

1890 Performance Analysis of Evolutionary ANN for Output Prediction of a Grid-Connected Photovoltaic System

Authors: S. I. Sulaiman, T. K. Abdul Rahman, I. Musirin, S. Shaari

Abstract:

This paper presents performance analysis of the Evolutionary Programming-Artificial Neural Network (EPANN) based technique to optimize the architecture and training parameters of a one-hidden layer feedforward ANN model for the prediction of energy output from a grid connected photovoltaic system. The ANN utilizes solar radiation and ambient temperature as its inputs while the output is the total watt-hour energy produced from the grid-connected PV system. EP is used to optimize the regression performance of the ANN model by determining the optimum values for the number of nodes in the hidden layer as well as the optimal momentum rate and learning rate for the training. The EPANN model is tested using two types of transfer function for the hidden layer, namely the tangent sigmoid and logarithmic sigmoid. The best transfer function, neural topology and learning parameters were selected based on the highest regression performance obtained during the ANN training and testing process. It is observed that the best transfer function configuration for the prediction model is [logarithmic sigmoid, purely linear].
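
A small evolutionary-programming-style sketch of tuning the hidden-layer size, learning rate and momentum of a one-hidden-layer network is shown below; the synthetic PV-like data, population size, mutation scheme and the use of scikit-learn in place of the authors' setup are all assumptions.

```python
# EP-style search over (hidden nodes, learning rate, momentum) for a small regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
radiation = rng.uniform(0, 1000, 300)                 # solar radiation, W/m^2 (toy)
temperature = rng.uniform(20, 40, 300)                # ambient temperature, deg C (toy)
energy = 0.8 * radiation - 2.0 * (temperature - 25) + rng.normal(0, 10, 300)

X = np.column_stack([radiation, temperature])
X = (X - X.mean(axis=0)) / X.std(axis=0)              # normalize inputs for SGD training
y = (energy - energy.mean()) / energy.std()

def fitness(genome):
    """Held-out regression R^2 for a given (hidden nodes, learning rate, momentum)."""
    hidden, lr, momentum = genome
    model = MLPRegressor(hidden_layer_sizes=(int(hidden),), solver="sgd",
                         learning_rate_init=lr, momentum=momentum,
                         max_iter=500, random_state=0)
    model.fit(X[:200], y[:200])
    return model.score(X[200:], y[200:])

population = [np.array([rng.integers(2, 20), rng.uniform(0.001, 0.1), rng.uniform(0.1, 0.9)])
              for _ in range(6)]
for generation in range(3):
    # EP-style mutation: each parent yields one mutated offspring, then the best survive.
    offspring = [np.clip(p + rng.normal(0, [2.0, 0.01, 0.05]), [2, 0.001, 0.05], [30, 0.2, 0.95])
                 for p in population]
    population = sorted(population + offspring, key=fitness, reverse=True)[:6]

best = population[0]
print("best (hidden nodes, learning rate, momentum):", best.round(3), "R^2:", round(fitness(best), 3))
```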

Keywords: Artificial neural network (ANN), Correlation coefficient (R), Evolutionary programming-ANN (EPANN), Photovoltaic (PV), logarithmic sigmoid and tangent sigmoid.

1889 Application of Neural Network and Finite Element for Prediction the Limiting Drawing Ratio in Deep Drawing Process

Authors: H. Mohammadi Majd, M. Jalali Azizpour, A. V. Hoseini

Abstract:

In this paper a back-propagation artificial neural network (BPANN) is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, a number of finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, thickness, yield strength of the sheet and strain hardening exponent were used as the input data, and the LDR as the specified output, in the training of the neural network. Given these parameters, the program is able to estimate the LDR for any new condition. Comparing the FEM and BPANN results, an acceptable correlation was found.
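
The regression set-up can be sketched as follows: the process parameters listed above in, LDR out, with a generic back-propagation network; the synthetic data, parameter ranges and network settings below are assumptions, not the paper's finite-element results.

```python
# Generic back-propagation regression sketch: process parameters -> LDR.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
# Columns: die radius, punch radius, die arc radius, friction, thickness,
# yield strength, hardening exponent (all ranges illustrative).
X = np.column_stack([
    rng.uniform(4, 12, n), rng.uniform(4, 12, n), rng.uniform(2, 8, n),
    rng.uniform(0.05, 0.2, n), rng.uniform(0.5, 2.0, n),
    rng.uniform(150, 400, n), rng.uniform(0.1, 0.3, n),
])
# Toy stand-in for the FEM-computed LDR.
ldr = 1.8 + 0.5 * X[:, 6] - 1.5 * X[:, 3] + 0.05 * X[:, 4] + rng.normal(0, 0.02, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0))
model.fit(X[:160], ldr[:160])
print("held-out R^2:", round(model.score(X[160:], ldr[160:]), 3))
```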

Keywords: Back-propagation artificial neural network (BPANN), deep drawing, prediction, limiting drawing ratio (LDR).

1888 Application of BP Neural Network Model in Sports Aerobics Performance Evaluation

Authors: Shuhe Shao

Abstract:

This article provides a partial evaluation index system and its standards for sports aerobics, comprising the following 12 indexes: health vitality, coordination, flexibility, accuracy, pace, endurance, elasticity, self-confidence, form, control, uniformity and musicality. A three-layer BP artificial neural network model consisting of an input layer, a hidden layer and an output layer is established. The results show that the model can reflect well the non-linear relationship between the performance on the 12 indexes and the overall performance. The predicted value of each sample is very close to the true value, with a relative error fluctuating around 5%, and the network training is successful. This shows that the BP network has high prediction accuracy and good generalization capacity when applied to sports aerobics performance evaluation after effective training.

Keywords: BP neural network, sports aerobics, performance, evaluation.

1887 Machine Learning Development Audit Framework: Assessment and Inspection of Risk and Quality of Data, Model and Development Process

Authors: Jan Stodt, Christoph Reich

Abstract:

The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model itself. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
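
An illustrative fragment of what such an audit might check automatically is sketched below (simple training-data quality indicators plus a model-metric threshold); the specific checks, thresholds and toy data are assumptions, not the authors' framework.

```python
# Toy audit checks: data quality indicators and a hold-out metric threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                        # toy plate features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)         # toy fault label

def audit_data(X, y, max_missing=0.01, min_class_share=0.2):
    report = {
        "missing_share": float(np.isnan(X).mean()),
        "minority_class_share": float(np.bincount(y).min() / len(y)),
    }
    report["data_ok"] = (report["missing_share"] <= max_missing
                         and report["minority_class_share"] >= min_class_share)
    return report

def audit_model(X, y, min_accuracy=0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    return {"holdout_accuracy": round(float(acc), 3), "model_ok": bool(acc >= min_accuracy)}

print(audit_data(X, y))
print(audit_model(X, y))
```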

Keywords: Audit, machine learning, assessment, metrics.

1886 Personal Information Classification Based on Deep Learning in Automatic Form Filling System

Authors: Shunzuo Wu, Xudong Luo, Yuanxiu Liao

Abstract:

Recently, the rapid development of deep learning has allowed artificial intelligence (AI) to penetrate many fields, replacing manual work there. In particular, AI systems have also become a research focus in the field of office automation. To meet real needs in office automation, in this paper we develop an automatic form filling system. Specifically, it uses two classical neural network models and several word embedding models to classify various relevant information elicited from the Internet. When training the neural network models, we use less noisy and balanced data. We conduct a series of experiments to test our system, and the results show that it can achieve better classification results.

Keywords: Personal information, deep learning, auto fill, NLP, document analysis.

1885 Prediction the Limiting Drawing Ratio in Deep Drawing Process by Back Propagation Artificial Neural Network

Authors: H. Mohammadi Majd, M. Jalali Azizpour, M. Goodarzi

Abstract:

In this paper a back-propagation artificial neural network (BPANN) with the Levenberg–Marquardt algorithm is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, a number of finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, thickness, yield strength of the sheet and strain hardening exponent were used as the input data, and the LDR as the specified output, in the training of the neural network. Given these parameters, the program is able to estimate the LDR for any new condition. Comparing the FEM and BPANN results, an acceptable correlation was found.

Keywords: BPANN, deep drawing, prediction, limiting drawing ratio (LDR), Levenberg–Marquardt algorithm.
