Search results for: teaching methods.
3769 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy
Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie
Abstract:
In recent years, there has been an explosion in the use of technologies that help discover diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell, with great potential to provide accurate medical diagnoses and to help find the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time of eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that, among the tree classifiers, Random Tree produces the best results and takes the lowest time to build a model, while among the classification rule algorithms the nearest-neighbor-like algorithm (NNge) is the best owing to its high accuracy and lowest model-building time.
Keywords: Data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data.
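To make the comparison concrete, here is a minimal sketch in Python: the scikit-learn classifiers, the synthetic dataset, and all parameters are stand-ins for the Weka Random Tree and NNge classifiers and the microarray data evaluated in the paper.

```python
# Hypothetical comparison of classifier accuracy and model-build time on a
# microarray-style dataset; the scikit-learn classifiers stand in for the
# Weka Random Tree and NNge algorithms evaluated in the paper.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a leukemia microarray dataset (samples x genes).
X, y = make_classification(n_samples=72, n_features=500, n_informative=50,
                           random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("nearest neighbor", KNeighborsClassifier(n_neighbors=3))]:
    start = time.perf_counter()
    scores = cross_val_score(clf, X, y, cv=5)   # accuracy per fold
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={scores.mean():.3f}, time={elapsed:.3f}s")
```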
3768 Dimensionality Reduction of PSSM Matrix and its Influence on Secondary Structure and Relative Solvent Accessibility Predictions
Authors: Rafał Adamczak
Abstract:
State-of-the-art methods for secondary structure prediction (Porter, Psi-PRED, SAM-T99sec, Sable) and solvent accessibility prediction (Sable, ACCpro) use evolutionary profiles represented by the position specific scoring matrix (PSSM). It has been demonstrated that evolutionary profiles are the most important features in the feature space for these predictions. Unfortunately, applying the PSSM matrix leads to high-dimensional feature spaces that may create problems with parameter optimization and generalization. Several recently published studies suggested that applying feature extraction to the PSSM matrix may result in improvements in secondary structure predictions. However, none of the top-performing methods considered here utilizes dimensionality reduction to improve generalization. In the present study, we used simple and fast feature selection methods (t-statistics, information gain) that allow us to decrease the dimensionality of the PSSM matrix by 75% and improve generalization in the case of secondary structure prediction compared with the Sable server.
Keywords: Secondary structure prediction, feature selection, position specific scoring matrix.
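A minimal sketch of this kind of filter-based selection, assuming synthetic data and scikit-learn's mutual-information score as the information-gain criterion; real input would be windows of PSSM columns.

```python
# Rank PSSM-derived features with an information-gain style score and keep
# only the top 25% (a 75% reduction), as the abstract describes.
import numpy as np
from sklearn.feature_selection import SelectPercentile, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20 * 15))  # e.g. 15-residue window x 20 PSSM scores
y = rng.integers(0, 3, size=1000)     # helix / strand / coil labels

selector = SelectPercentile(mutual_info_classif, percentile=25)
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # (1000, 300) -> (1000, 75)
```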
3767 Models and Metamodels for Computer-Assisted Natural Language Grammar Learning
Authors: Evgeny Pyshkin, Maxim Mozgovoy, Vladislav Volkov
Abstract:
The paper follows a discourse on computer-assisted language learning. We examine problems of foreign language teaching and learning and introduce a metamodel that can be used to define learning models of language grammar structures in order to support teacher/student interaction. Special attention is paid to the concept of a virtual language lab. Our approach to language education aims to encourage learners to experiment with a language and to learn by discovering patterns of grammatically correct structures created and managed by a language expert.
Keywords: Computer-assisted instruction, Language learning, Natural language grammar models, HCI.
3766 The Use of a Bespoke Computer Game for Teaching Analogue Electronics
Authors: Olaf Hallan Graven, Dag Andreas Hals Samuelsen
Abstract:
An implementation of a design for a game-based virtual learning environment is described. The game was developed for a course in analogue electronics, and the topic is the design of a power supply. This task can be solved in a number of different ways, within certain constraints, giving the students a certain amount of freedom, although the game is designed not to facilitate a trial-and-error approach. The use of storytelling and a virtual gaming environment provides the student with the learning material in an MMORPG environment. The game was tested on a group of second-year electrical engineering students with good results.
Keywords: Analogue electronics, e-learning, computer games for learning, virtual reality.
3765 Removing Ocular Artifacts from EEG Signals using Adaptive Filtering and ARMAX Modeling
Authors: Parisa Shooshtari, Gelareh Mohamadi, Behnam Molaee Ardekani, Mohammad Bagher Shamsollahi
Abstract:
EEG is one of the oldest measures of brain activity and has been used extensively for clinical diagnosis and biomedical research. However, EEG signals are highly contaminated with various artifacts, both from the subject and from equipment interference. Among these, ocular noise is the most important. Since many applications, such as BCI, require online and real-time processing of the EEG signal, it is ideal if the removal of artifacts is performed online as well. Recently, some methods for online ocular artifact removal have been proposed. One of these methods is ARMAX modeling of the EEG signal. This method assumes that the recorded EEG signal is a combination of EOG artifacts and the background EEG; the background EEG is then estimated via estimation of the ARMAX parameters. The other recently proposed method is based on adaptive filtering; it uses the EOG signal as the reference input and subtracts EOG artifacts from the recorded EEG signals. In this paper, we investigate the efficiency of each method for removing EOG artifacts and compare the two. Our conclusion from this comparison is that the adaptive filtering method yields better results than ARMAX modeling.
Keywords: Ocular artifacts, EEG, adaptive filtering, ARMAX.
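A minimal LMS adaptive-filter sketch of the second method, under stated assumptions (synthetic signals, an invented filter length and step size): the recorded EOG serves as the reference input, and its filtered estimate is subtracted from the contaminated EEG.

```python
import numpy as np

def lms_artifact_removal(eeg, eog, n_taps=5, mu=0.01):
    """Return EEG with the EOG-correlated component removed sample by sample."""
    w = np.zeros(n_taps)                 # adaptive filter weights
    clean = np.zeros_like(eeg)
    for n in range(n_taps, len(eeg)):
        x = eog[n - n_taps:n][::-1]      # most recent EOG reference samples
        artifact_est = w @ x             # estimated ocular artifact
        e = eeg[n] - artifact_est        # error signal = cleaned EEG sample
        w += 2 * mu * e * x              # LMS weight update
        clean[n] = e
    return clean

t = np.arange(0, 10, 1 / 250)            # 250 Hz sampling, 10 s
eog = np.sin(2 * np.pi * 0.5 * t)        # slow ocular reference
eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * eog   # alpha rhythm + EOG artifact
print(lms_artifact_removal(eeg, eog)[:5])
```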
3764 Educational Robotics Constructivism and Modeling of Robots using Reverse Engineering
Authors: David G. Maxínez, A. Ferreyra Ramírez, Ismael Echenique Álvarez, Francisco Javier Sánchez Rangel, Guillermo Castillo Tapia, Petra Baldivia Noyola, María Antonieta García Galván
Abstract:
The project describes the modeling of various mechatronic architectures, specifically robot morphologies, in an educational environment. Each structure, developed by pre-school, primary, and secondary students, was created using the concept of reverse engineering in a constructivist environment, to be later integrated into educational software that promotes the teaching of educational robotics in a virtual and economical environment.
Keywords: Modeling, constructivist, engineering, reverse, robotics education, virtual, morphology.
3763 Mathematical Reconstruction of an Object Image Using X-Ray Interferometric Fourier Holography Method
Authors: M. K. Balyan
Abstract:
The main principles of the X-ray Fourier interferometric holography method are discussed. The object image is reconstructed by the mathematical method of Fourier transformation. Three methods are presented: the approximation method, the iteration method, and the step-by-step method. As an example, the reconstruction of the complex amplitude transmission coefficient of a beryllium wire is considered. The results reconstructed by the three presented methods are compared; the best results are obtained by means of the step-by-step method.
Keywords: Dynamical diffraction, hologram, object image, X-ray holography.
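For context, a generic off-axis Fourier-holography reconstruction sketch: the Fourier transform of the hologram separates the object sideband, and an inverse transform of that sideband recovers the complex object amplitude. The object, reference wave, and window width are invented, and this is not the paper's specific X-ray interferometric algorithm.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
obj = np.exp(-(X**2 + Y**2) / 0.05)            # stand-in object transmission
ref = np.exp(1j * 2 * np.pi * 20 * X)          # tilted reference wave
hologram = np.abs(obj + ref) ** 2              # recorded intensity

spectrum = np.fft.fftshift(np.fft.fft2(hologram))
row = np.abs(spectrum[N // 2])
c = N // 2 + 5 + int(row[N // 2 + 5:].argmax())  # locate the sideband carrier
sideband = np.zeros_like(spectrum)
sideband[:, c - 15:c + 15] = spectrum[:, c - 15:c + 15]
recovered = np.fft.ifft2(np.fft.ifftshift(sideband))
print(np.abs(recovered).max())                 # reconstructed object amplitude
```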
3762 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques
Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah
Abstract:
Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming, and error-prone. This has partly led to many cost estimates being unclear and riddled with inaccuracies that at times lead to over- or underestimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies proposed.
Keywords: BIM, Construction projects, Cost estimation, NRM, Ontology.
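A hypothetical sketch of the core idea, expressing a text-based measurement rule as machine-readable ontology triples with rdflib; the NRM namespace, class names, and the example rule are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

NRM = Namespace("http://example.org/nrm#")  # hypothetical namespace
g = Graph()
g.bind("nrm", NRM)

# A work item and the rule that governs how it is measured.
g.add((NRM.Excavation, RDF.type, RDFS.Class))
g.add((NRM.BulkExcavation, RDFS.subClassOf, NRM.Excavation))
g.add((NRM.BulkExcavation, NRM.measuredIn, Literal("m3")))
g.add((NRM.BulkExcavation, NRM.rule,
       Literal("Measure the volume of material excavated; no allowance "
               "for working space.")))

print(g.serialize(format="turtle"))  # machine-readable form a BIM tool can load
```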
3761 Impacts of E-learning in Nursing Education: In the Light of Recent Studies
Authors: A.Ö. İlkay, C.O. Zeynep
Abstract:
Information and Communication Technologies (ICT) have changed the way we live and learn. ICT opens doors to new, innovative methods of delivering education. E-learning is a part of ICT and has been endorsed as a tool for developing "21st century skills" in higher education. The aim of this review is to establish the impacts of e-learning in undergraduate nursing education. A systematic literature review was conducted to assess the impacts of e-learning in nursing education using the Akdeniz University electronic databases. According to the results, we can declare that nursing faculties cannot treat e-learning methods as a single tool. E-learning should be used with a good understanding of learners’ needs.
Keywords: E-learning, nursing education, systematic literature review.
3760 Case Studies in Three Domains of Learning: Cognitive, Affective, Psychomotor
Authors: Zeinabsadat Haghshenas
Abstract:
Bloom’s Taxonomy has changed over the years. This paper discusses the revision that has happened in both facts and terms. It also contains case studies of using the cognitive domain of Bloom’s taxonomy in teaching geometric solids to secondary school students, affective objectives in a creative workshop for adults, and psychomotor objectives in fixing a malfunctioning refrigerator lamp. The important role of classifying objectives in adult education as a way to prevent memory loss is also pointed out.
Keywords: Adult education, affective domain, cognitive domain, memory loss, psychomotor domain.
3759 Attribute Weighted Class Complexity: A New Metric for Measuring Cognitive Complexity of OO Systems
Authors: Dr. L. Arockiam, A. Aloysius
Abstract:
In general, class complexity is measured based on any one of several factors, such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), Number of Attributes (NOA), and so on. Several new techniques, methods, and metrics based on different factors have been developed by researchers for calculating the complexity of a class in object-oriented (OO) software. Earlier, Arockiam et al. proposed a complexity measure named Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of the class and of the derived classes. In EWCC, the cognitive weight of each attribute is considered to be 1. The main problem with the EWCC metric is that every attribute holds the same value, while in general the cognitive load in understanding different types of attributes cannot be the same. We therefore propose a new metric named Attribute Weighted Class Complexity (AWCC), in which cognitive weights are assigned to attributes according to the effort needed to understand their data types. The proposed metric has proved to be a better measure of the complexity of a class with attributes through case studies and experiments.
Keywords: Software complexity, Attribute Weighted Class Complexity, Weighted Class Complexity, data type.
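An illustrative sketch of the AWCC idea, assuming an invented table of data-type weights (the paper derives its own cognitive weight values).

```python
# AWCC = sum of per-attribute cognitive weights assigned by data type,
# plus the cognitive weights of the methods.
TYPE_WEIGHTS = {"int": 1, "float": 1, "str": 2, "list": 3, "dict": 4}

def awcc(attribute_types, method_weights):
    """Attribute Weighted Class Complexity for one class."""
    attr_part = sum(TYPE_WEIGHTS.get(t, 2) for t in attribute_types)
    return attr_part + sum(method_weights)

# A class with three attributes and two methods of cognitive weights 3 and 5:
print(awcc(["int", "dict", "str"], [3, 5]))   # 1 + 4 + 2 + 3 + 5 = 15
```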
3758 Advanced Neural Network Learning Applied to Pulping Modeling
Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were applied to the pulping problem; three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods are the focus of this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We focused mainly on PCG-based training methods that originated in optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), with Polak-Ribiere update (PCGP), and with Powell-Beale restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.
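A minimal sketch of the preconditioned system the abstract describes, solving M⁻¹Ax = M⁻¹b with SciPy's conjugate gradient and a simple Jacobi (diagonal) preconditioner; the toy matrix is an assumption.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)           # symmetric positive-definite system
b = rng.normal(size=50)

# Jacobi preconditioner: M = diag(A), applied as M^-1 v.
M_inv = LinearOperator((50, 50), matvec=lambda v: v / np.diag(A))
x, info = cg(A, b, M=M_inv)             # preconditioned conjugate gradient
print(info, np.linalg.norm(A @ x - b))  # info == 0 means converged
```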
3757 Evolution of Performance Measurement Methods in Conditions of Uncertainty: The Implementation of Fuzzy Sets in Performance Measurement
Authors: E. A. Tkachenko, E. M. Rogova, V. V. Klimov
Abstract:
One of the basic issues of development management is performance measurement as a prerequisite for identifying the achievement of development objectives. The aim of our research is to develop an improved model for assessing a company’s development results. The model should take into account the cyclical nature of development and the high degree of uncertainty in dealing with numerous management tasks. Our hypotheses may be formulated as follows. Hypothesis 1: the cycle of a company’s development may be studied from the standpoint of a project cycle, using the methods and tools of project analysis. Hypothesis 2: the problem of uncertainty when justifying managerial decisions within the framework of a company’s development cycle can be solved through the mathematical apparatus of fuzzy logic. A reasoned justification of the validity of these hypotheses is given in the article. The fuzzy logic toolkit is applied to the case of a technology shift within an enterprise. It is shown that some restrictions on performance measurement inherent in conventional methods can be eliminated by implementing the fuzzy logic apparatus in performance measurement models.
Keywords: Fuzzy logic, fuzzy sets, performance measurement, project analysis.
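A small illustration of the fuzzy-sets idea in this setting, with invented category bounds: a performance indicator is mapped to graded membership in linguistic categories instead of a hard pass/fail threshold.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

score = 0.62   # a normalized performance measurement
memberships = {
    "poor":      triangular(score, 0.0, 0.0, 0.5),
    "adequate":  triangular(score, 0.3, 0.5, 0.8),
    "excellent": triangular(score, 0.6, 1.0, 1.0),
}
print(memberships)  # partially 'adequate' and slightly 'excellent'
```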
3756 Integrated Grey Relational Analysis-Standard Deviation Method for Handover in Heterogeneous Networks
Authors: Mohanad Alhabo, Naveed Nawaz, Mahmoud Al-Faris
Abstract:
The dense deployment of small cells is a promising solution to enhance the coverage and capacity of heterogeneous networks (HetNets). However, unplanned deployment can bring new challenges to the network, ranging from interference to unnecessary handovers and handover failures, causing a degradation in the quality of service (QoS) delivered to the end user. In this paper, we propose an integrated Grey Relational Analysis-Standard Deviation based handover method (GRA-SD) for HetNets. The proposed method integrates the Standard Deviation (SD) technique to acquire the weights of the handover metrics and the GRA method to select the best handover base station. The performance of the GRA-SD method is evaluated and compared with traditional Multiple Attribute Decision Making (MADM) methods, including the Simple Additive Weighting (SAW) and VIKOR methods. Results reveal that the proposed method outperforms the other methods in minimizing the number of frequent unnecessary handovers and handover failures, in addition to improving energy efficiency.
Keywords: Energy efficiency, handover, HetNets, MADM, small cells.
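A sketch of the GRA-SD selection step under stated assumptions (invented, normalized, larger-is-better metrics): standard-deviation-based weights feed a grey relational grade, and the candidate with the highest grade wins.

```python
import numpy as np

# Rows: candidate base stations; columns: handover metrics, normalized to
# [0, 1] with larger-is-better (e.g. SINR, available bandwidth, inverse load).
X = np.array([[0.8, 0.4, 0.6],
              [0.6, 0.9, 0.5],
              [0.7, 0.7, 0.9]])

w = X.std(axis=0) / X.std(axis=0).sum()   # SD-based metric weights
ideal = X.max(axis=0)                     # reference (ideal) sequence
delta = np.abs(X - ideal)                 # deviation from the ideal
rho = 0.5                                 # distinguishing coefficient
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grades = coeff @ w                        # grey relational grades
print("best base station:", int(grades.argmax()))
```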
3755 Numerical Modelling of Crack Initiation around a Wellbore Due to Explosion
Authors: Meysam Lak, Mohammad Fatehi Marji, Alireza Yarahamdi Bafghi, Abolfazl Abdollahipour
Abstract:
A wellbore is a hole drilled to aid in the exploration and recovery of natural resources, including oil and gas. Occasionally, well stimulation methods are used to increase the productivity index and the porosity of the wellbore and reservoir. Hydraulic fracturing is one of these methods; moreover, several explosions at the end of the well can stimulate the reservoir and create fractures around it. In this study, crack initiation in the rock around a wellbore due to explosion has been numerically modeled. One, two, three, and four pairs of explosions were set at the end of the wellbore, on its wall. After each stage of the explosion, the results are presented and discussed. The results show that this method can initiate, and probably propagate, several fractures around the wellbore.
Keywords: Crack initiation, explosion, finite difference modelling, well productivity.
3754 Promoting Reflection through Action Learning in a 3D Virtual World
Authors: R.L. Sanders, L. McKeown
Abstract:
An international cooperation between educators in Australia and the US has led to a reconceptualization of the teaching of a library science course at Appalachian State University. The pedagogy of action learning, coupled with a 3D virtual learning environment, immerses students in a social constructivist learning space that incorporates and supports interaction and reflection. The intent of this study was to build a bridge between theory and practice by providing students with a tool set that promoted personal and social reflection, and that created and scaffolded a community of practice. Action learning is an educational process whereby the fifty graduate students drew on their own actions and experience to improve performance.
Keywords: Action learning, action research, reflection, metacognition, virtual worlds.
3753 Comparative Study of QRS Complex Detection in ECG
Authors: Ibtihel Nouira, Asma Ben Abdallah, Ibtissem Kouaja, Mohamed Hèdi Bedoui
Abstract:
The processing of the electrocardiogram (ECG) signal consists essentially of detecting the characteristic points of the signal, which are an important tool in the diagnosis of heart diseases. The most important of these is the detection of the R waves. In this paper, we present various mathematical tools used for filtering the ECG, using digital filtering and Discrete Wavelet Transform (DWT) filtering. In addition, the paper includes two main R-peak detection methods that apply a windowing process: the first method is based on derivative calculations, and the second is a time-frequency method based on the Dyadic Wavelet Transform (DyWT).
Keywords: Derivative calculation methods, electrocardiogram, R peaks, wavelet transform.
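A minimal sketch of the first detection method described, a derivative-based R-peak detector with a windowed threshold; the sampling rate, window length, and synthetic ECG-like signal are assumptions.

```python
import numpy as np

fs = 250                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[::fs] = 1.0                           # idealized R spikes, one per second
ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)

deriv = np.diff(ecg) ** 2                 # squared derivative emphasizes QRS
window = int(0.15 * fs)                   # 150 ms integration window
energy = np.convolve(deriv, np.ones(window) / window, mode="same")
threshold = 0.5 * energy.max()
peaks = np.where(energy > threshold)[0]
print("candidate R regions (sample indices):", peaks[:10])
```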
3752 Application of Reliability Methods for Concrete Dams
Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar
Abstract:
Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering; in our case, level 2 methods via the study of the limit state. The probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) types. By way of comparison, a level 3 method was used, which generates a full analysis of the problem and involves integrating the probability density function of the random variables over the safety domain using the Monte Carlo simulation method. Taking into account the change in stress under the load combinations acting on the dam (normal, exceptional, and extreme), the results obtained provide acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; the shear forces then induce sliding that threatens the reliability of the structure with intolerable values of the failure probability, especially if uplift increases under a hypothetical failure of the drainage system.
Keywords: Dam, failure, limit state, Monte Carlo simulation, reliability, probability, sliding, Taylor.
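A minimal Monte Carlo sketch of the level 3 approach, assuming an invented limit state g = R - S (resistance minus load effect) with illustrative normal distributions; the failure probability is the fraction of samples with g < 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
R = rng.normal(loc=10.0, scale=1.5, size=n)  # resistance (e.g. shear capacity)
S = rng.normal(loc=6.0, scale=1.0, size=n)   # load effect (e.g. shear demand)

g = R - S                                    # limit-state function
pf = np.mean(g < 0.0)                        # estimated failure probability
print(f"P(failure) ~ {pf:.2e}")
```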
3751 Determination of Measurement Uncertainty in the Extraction of Forming Limit Diagrams
Authors: M. Mahboubkhah, H. Fayazfar
Abstract:
In this research, forming limit diagrams (FLDs) for supertension sheet metals used in the automotive industry have been obtained. The strains exerted on the sheet metals were measured with four different methods, and the errors of each method are also presented. These methods are compared with each other, and the most efficient and economical way of extracting the strains exerted on sheet metals is introduced. In this paper, the total error and uncertainty of FLD extraction procedures are derived. Determining the measurement uncertainty in extracting FLDs is of great importance in the design and analysis of sheet metal forming processes.
Keywords: Forming limit diagram, major and minor strain, measurement uncertainty.
3750 Estimation of Real Power Transfer Allocation Using Intelligent Systems
Authors: H. Shareef, A. Mohamed, S. A. Khalid, Aziah Khamis
Abstract:
This paper presents the application of artificial intelligence (AI) techniques, namely the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), to estimate the real power transfer between generators and loads. Since these AI techniques adopt supervised learning, the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to the loads. The results of the MNE method and load flow information are then utilized to estimate the power transfer using the AI techniques. The 25-bus equivalent system of south Malaysia is utilized as a test system to illustrate the effectiveness of both AI methods compared with the MNE method. The mean squared errors of the ANN and ANFIS power transfer allocation estimates are 1.19E-05 and 2.97E-05, respectively. Furthermore, the ANN and ANFIS methods compute the generator contribution to loads within 20.99 and 39.37 ms respectively, whereas the MNE method took 360 ms to calculate the same real power transfer allocation.
Keywords: Artificial intelligence, Power tracing, Artificial neural network, ANFIS, Power system deregulation.
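A hedged sketch of the supervised-learning setup, with synthetic stand-ins for the load-flow features and the MNE-computed targets: a small neural network learns the mapping and can then predict allocations much faster than re-running MNE.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))     # load-flow features (e.g. bus injections)
y = X @ rng.normal(size=10)        # stand-in for MNE-derived allocations

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])        # train on MNE-labelled cases
mse = np.mean((model.predict(X[400:]) - y[400:]) ** 2)
print(f"test MSE: {mse:.2e}")
```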
3749 A Comparison of Some Thresholding Selection Methods for Wavelet Regression
Authors: Alsaidi M. Altaher, Mohd T. Ismail
Abstract:
In wavelet regression, choosing the threshold value is a crucial issue. Too large a value cuts too many coefficients, resulting in oversmoothing; conversely, too small a threshold value allows many coefficients to be included in the reconstruction, giving a wiggly estimate that results in undersmoothing. The proper choice of threshold can thus be considered a careful balance of these principles. This paper gives a very brief introduction to some threshold selection methods: universal, SURE, EBayes, two-fold cross validation, and level-dependent cross validation. A simulation study on a variety of sample sizes, test functions, and signal-to-noise ratios is conducted to compare their numerical performance using three different noise structures. For Gaussian noise, EBayes outperforms in all cases for all the functions used, while two-fold cross validation provides the best results in the case of long-tailed noise. For large signal-to-noise ratios, level-dependent cross validation works well in the correlated noise case. As expected, increasing both the sample size and the signal-to-noise ratio increases estimation efficiency.
Keywords: Wavelet regression, simulation, threshold.
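A minimal sketch of one of the compared rules, the universal threshold, using PyWavelets on a synthetic test function; the wavelet family, decomposition level, and noise scale are assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024
x = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * x)                  # smooth test function
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise level from finest scale
thresh = sigma * np.sqrt(2 * np.log(n))         # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")
print(f"RMSE: {np.sqrt(np.mean((denoised - signal) ** 2)):.3f}")
```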
3748 Making Data Structures and Algorithms more Understandable by Programming Sudoku the Human Way
Authors: Roelien Goede
Abstract:
Data Structures and Algorithms is a module in most Computer Science or Information Technology curricula, and it is one of the modules most students identify as difficult. This paper demonstrates how programming a solution for Sudoku can make abstract concepts more concrete. The paper relates concepts of a typical Data Structures and Algorithms module to a step-by-step solution for Sudoku in a human style, as opposed to a computer-oriented solution.
Keywords: Data structures, algorithms, Sudoku, object-oriented programming, programming teaching, education.
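An illustrative sketch of one "human way" solving step, the naked-single elimination people apply on paper; the grid representation is an assumption.

```python
def solve_singles(grid):
    """grid: 9x9 list of lists, 0 = empty. Fill cells with one candidate."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c]:
                    continue
                used = set(grid[r]) | {grid[i][c] for i in range(9)}
                br, bc = 3 * (r // 3), 3 * (c // 3)
                used |= {grid[i][j] for i in range(br, br + 3)
                                    for j in range(bc, bc + 3)}
                candidates = set(range(1, 10)) - used
                if len(candidates) == 1:        # a 'naked single'
                    grid[r][c] = candidates.pop()
                    progress = True
    return grid

# Example: a cell whose row leaves exactly one candidate.
grid = [[0] * 9 for _ in range(9)]
grid[0] = [5, 3, 4, 6, 7, 8, 9, 1, 0]           # only 2 can complete this row
print(solve_singles(grid)[0])
```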
3747 Estimating the Parameter of the Mean in Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods
Authors: Autcha Araveeporn
Abstract:
This paper compares parameter estimation of the mean in a normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data, the Bayes estimator is derived from the prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. These methods are used to estimate the parameter, and hypothesis testing is then used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variances of 4, 9, and 16, with sample sizes of 10, 20, 30, and 50. From the results, it can be seen that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, computed with a prior mean of 1 and prior variance of 12, showed a significant difference in the mean for variance 9 at sample sizes 10 and 20.
Keywords: Bayes method, Markov Chain Monte Carlo method, Maximum Likelihood method, normal distribution.
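A compact sketch comparing the three estimators under the abstract's setup (true mean 2, known variance, conjugate normal prior with mean 1 and variance 12); with this conjugate model the Gibbs step reduces to drawing directly from the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, var, n = 2.0, 16.0, 20
x = rng.normal(mu_true, np.sqrt(var), size=n)

mle = x.mean()                                       # ML estimator

mu0, var0 = 1.0, 12.0                                # prior parameters
post_var = 1.0 / (1.0 / var0 + n / var)              # conjugate posterior
post_mean = post_var * (mu0 / var0 + x.sum() / var)  # Bayes estimator

draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)
mcmc = draws.mean()                                  # MCMC (posterior draws)

print(f"ML={mle:.3f}  Bayes={post_mean:.3f}  MCMC={mcmc:.3f}")
```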
3746 The Role of Creative Thinking in Science Education
Authors: Jindriska Svobodova, Jan Novotny
Abstract:
A teacher’s attitude to creativity plays an essential role in the development of his or her students’ thinking. The purpose of this study is to understand whether a science teacher’s personal creativity can modify his or her ability to produce various kinds of questions. This research used an educational activity based on cosmic sketches and pictures by K. E. Tsiolkovsky, the founder of astronautics, to explore whether any relationship exists between individual creativity and question-asking skills. As a screening instrument to assess the respondents’ creative potential, a common test of creative thinking was used. The results of the creativity test and the diversity of the questions are presented.
Keywords: Science education, active learning, physics teaching, creativity.
3745 Increased Capacity of Information Hiding in LSB's Method for Text and Image
Authors: H.B.Kekre, Archana Athawale, Pallavi N.Halarnkar
Abstract:
Steganography, derived from Greek, literally means "covered writing". It includes a vast array of secret communication methods that conceal the very existence of the message. These methods include invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. This paper proposes a new, improved version of the Least Significant Bit (LSB) method. The proposed approach is simple to implement compared with the Pixel Value Differencing (PVD) method, yet achieves high embedding capacity and imperceptibility. The proposed method can also be applied to 24-bit color images, achieving an embedding capacity much higher than PVD.
Keywords: Information hiding, LSB matching, PVD steganography.
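For context, a sketch of classic LSB embedding (the baseline the paper improves on, not its proposed variant): each message bit replaces the least significant bit of one 8-bit pixel value.

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Hide an iterable of 0/1 bits in the LSBs of a flat uint8 array."""
    stego = pixels.copy()
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(pixels, n_bits):
    return [int(p & 1) for p in pixels[:n_bits]]

cover = np.array([120, 121, 200, 33, 64, 250, 17, 90], dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, 8) == message
print(stego)   # differs from cover by at most 1 per pixel
```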
3744 Evaluation of Traditional Methods in Construction and Their Effects on Reinforced-Concrete Buildings' Behavior
Authors: E. H. N. Gashti, M. Zarrini, M. Irannezhad, J. R. Langroudi
Abstract:
Using ETABS software, this study analyzed 23 buildings to evaluate the effects of mistakes made during the construction phase on the buildings' structural behavior. For the modelling, two different loadings were assumed: 1) the design loading and 2) the loading due to the effects of mistakes in the construction phase. The results determined that traditional construction methods resulted in a significant increase in dead loads and consequently intensified the displacements and base shears of the buildings under seismic loads.
Keywords: Reinforced-concrete buildings, construction mistakes, base shear, displacements, failure.
3743 Parallel Image Compression and Analysis with Wavelets
Authors: M. Kutila, J. Viitanen
Abstract:
This paper presents image compression with a wavelet-based method. The wavelet transformation divides the image into low- and high-pass filtered parts. The traditional JPEG compression technique requires less computational power, with acceptable losses, when only compression is needed. However, there is an obvious need for wavelet-based methods in certain circumstances, namely in applications in which image analysis is done in parallel with compression. Furthermore, the high-frequency bands can be used to detect changes or edges. Wavelets enable hierarchical analysis of the low-pass filtered sub-images: the first analysis can be done on a small image, and only if anything interesting is found is the whole image processed or reconstructed.
Keywords: Image compression, JPEG, wavelet, VLC.
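A sketch of the hierarchical idea above, using PyWavelets on a synthetic image: one 2D transform yields a quarter-size low-pass approximation to analyze first, plus high-frequency bands usable for edge or change detection.

```python
import numpy as np
import pywt

img = np.zeros((128, 128))
img[31:97, 31:97] = 1.0                  # a square with sharp edges

approx, (horiz, vert, diag) = pywt.dwt2(img, "haar")
print("approximation:", approx.shape)    # quarter-size preview to analyze
edge_energy = np.abs(horiz) + np.abs(vert) + np.abs(diag)
print("max edge response:", edge_energy.max())  # nonzero only at the edges
```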
3742 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies
Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk
Abstract:
Recently, the application of AI-powered algorithms in healthcare has continued to flourish. Access to healthcare information, including patient health history, diagnostic data, and personally identifiable information (PII), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person’s information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has resulted in a significant concentration on the security risks and protection measures for healthcare data, leading to escalated analyses and enforcement. Since these challenges are brought about by the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. This project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods such as federated learning, cryptographic techniques, differential privacy methods, and hybrid methods are discussed, together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts and laws that govern the collection, storage, and processing of healthcare data to guarantee that owners’ privacy is preserved, and it identifies various gaps and uncertainties associated with healthcare AI data collection procedures along with potential correction and mitigation measures.
Keywords: Data privacy, artificial intelligence, healthcare AI, data sharing, healthcare organizations.
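A minimal sketch of one method named above, differential privacy via the Laplace mechanism, with invented data: noise scaled to sensitivity/epsilon is added to an aggregate query over patient records.

```python
import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count; a counting query has sensitivity 1."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 67, 45, 71, 52, 80, 29, 63]      # toy patient attribute
print(dp_count(ages, lambda a: a >= 65))     # noisy count of patients 65+
```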
3741 Introduction to Electron Spectroscopy for Surface Characterization
Authors: Abdelkader Benzian
Abstract:
Spectroscopy is the study of the spectrum produced by radiation-matter interaction, which requires the study of electromagnetic radiation (or electrons) emitted, absorbed, or scattered by matter. Spectral analysis thus uses spectrometers, which enable us to obtain curves expressing the distribution of the emitted energy (the spectrum). The analysis of emission spectra can therefore constitute several methods, depending on the radiation energy range. The most common methods used are Auger Electron Spectroscopy (AES) and Electron Energy Loss Spectroscopy (EELS), which allow the determination of the atomic structure at the surface. This paper focuses essentially on electron energy loss spectroscopy.
Keywords: Dielectric, plasmon, mean free path, spectroscopy of electron energy losses.
3740 Initialization Method of Reference Vectors for Improvement of Recognition Accuracy in LVQ
Authors: Yuji Mizuno, Hiroshi Mabuchi
Abstract:
Initial values of reference vectors have a significant influence on recognition accuracy in LVQ. There are several existing techniques, such as SOM and k-means, for setting the initial values of reference vectors, each of which has provided some positive results. However, those results are not sufficient to improve recognition accuracy. This study proposes an ACO-based method for initializing reference vectors, with the aim of achieving recognition accuracy higher than that obtained through conventional methods. We demonstrate the effectiveness of the proposed method by applying it to the wine data and English vowel data and comparing its results with those of conventional methods.
Keywords: Clustering, LVQ, ACO, SOM, k-means.
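For context, a bare LVQ1 training pass, the stage whose accuracy depends on how the reference vectors are initialized; the k-means initialization shown is one of the conventional baselines named in the abstract, and the learning rate and single prototype per class are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_wine

X, y = load_wine(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# One reference vector per class, initialized by k-means on that class.
refs = np.array([KMeans(n_clusters=1, n_init=10, random_state=0)
                 .fit(X[y == c]).cluster_centers_[0] for c in np.unique(y)])

alpha = 0.05                                   # learning rate
for xi, yi in zip(X, y):
    w = np.argmin(np.linalg.norm(refs - xi, axis=1))   # winning prototype
    step = alpha * (xi - refs[w])
    refs[w] += step if w == yi else -step      # LVQ1 attract/repel update

pred = np.argmin(np.linalg.norm(X[:, None] - refs, axis=2), axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```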