Search results for: random subspace-based feature evaluation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9608

9278 Exploring Students’ Voices in Lecturers’ Teaching and Learning Developmental Trajectory

Authors: Khashane Stephen Malatji, Makwalete Johanna Malatji

Abstract:

Student evaluation of teaching (SET) is the most common way of assessing teaching quality at universities and of tracing the professional growth of lecturers. The aim of this study was to investigate the role played by student evaluation in the teaching and learning agenda at one South African university. The researchers used a qualitative approach and a case study research design. For data collection, document analysis was used: evaluation reports were reviewed to monitor the growth of lecturers who were evaluated during the 2020 and 2021 academic years in one faculty. The results reveal that student evaluation remains the most relevant tool to inform the teaching agenda at a university. Lecturers who were evaluated were found to grow academically; all lecturers evaluated during 2020 showed great improvement when re-evaluated during 2021. It can therefore be concluded that student evaluation helps to improve the pedagogical and professional proficiency of lecturers. The study recommends that lecturers conduct an evaluation for each module they teach every semester, or annually in the case of year-long modules, and that they attend to all areas that draw negative comments from students in order to improve.

Keywords: students’ voices, teaching agenda, evaluation, feedback, responses

Procedia PDF Downloads 86
9277 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity of, and controversy over, the definition of music genres across nations and cultures, automatic genre classification systems that facilitate music categorization have been developed. Manual genre selections by music producers are provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term, time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated both in the log spectral magnitude domain (NMF-LSM) and in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the system extracted the NMF weighting values for each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs each, was used for training and testing, and 10-fold cross-validation was used to increase the reliability of the experiments. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features such as the statistical and modulation spectrum features, the NMF features increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, whereas the basic features with NMF-LSM and with NMF-BFV yielded 85.1% and 84.2%, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features with both NMF-LSM and NMF-BFV, together with an SVM using a radial basis function (RBF) kernel, produced a significantly higher classification accuracy of 88.3% at a feature dimensionality of 480.
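
A minimal sketch of the NMF-weight feature idea, assuming scikit-learn and random non-negative matrices in place of the actual GTZAN spectra; the audio pipeline, timbre features, and band split are omitted, and summing the activations is a simplification of the paper's weighting values:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_genres, n_bins = 10, 128

# Random non-negative matrices stand in for per-genre log-magnitude spectra.
train = {g: np.abs(rng.normal(size=(100, n_bins))) for g in range(n_genres)}

# Training stage: learn a small NMF basis for each genre class.
bases = {g: NMF(n_components=4, max_iter=500, random_state=0).fit(X)
         for g, X in train.items()}

def nmf_feature(x):
    # Project one spectrum onto every genre basis; the summed activation per
    # genre gives a 10-dimensional feature vector (a simplified "weighting").
    return np.array([b.transform(x[None, :]).sum() for b in bases.values()])

# Test-stage features for all songs, then an RBF-kernel SVM as the classifier.
X_feat = np.vstack([nmf_feature(x) for g in train for x in train[g]])
y = np.repeat(list(train), 100)
clf = SVC(kernel="rbf").fit(X_feat, y)
print(clf.score(X_feat, y))
```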

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 294
9276 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information

Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu

Abstract:

In semantic communication, existing pilot-free joint source-channel coding (JSCC) wireless communication systems have unstable transmission performance and cannot effectively capture the global and positional information of images. In this paper, a pilot-free image transmission system of joint source-channel coding based on multi-level semantic information (multi-level JSCC) is proposed. The transmitter of the system is composed of two networks: a feature extraction network extracts the high-level semantic features of the image, compressing the transmitted information and improving bandwidth utilization, while a feature retention network preserves low-level semantic features and image details to improve communication quality. The receiver is likewise composed of two networks: the received high-level semantic features are fused, in the same dimension, with the low-level semantic features after the latter pass through a feature enhancement network; the image dimensions are then restored through a feature recovery network, with the positional information used effectively for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information over both AWGN and Rayleigh fading channels, improving the peak signal-to-noise ratio (PSNR) by 1-2 dB compared with other algorithms under the same simulation conditions.
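
A hedged PyTorch sketch of the two-branch transmitter/receiver layout described above; the layer sizes, the AWGN channel model, and the fusion step are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiLevelJSCC(nn.Module):
    """Two-branch JSCC autoencoder: one branch compresses high-level
    semantics, the other retains low-level detail (illustrative sizes)."""
    def __init__(self, c=8):
        super().__init__()
        # Transmitter: feature extraction (high-level), feature retention (low-level).
        self.extract = nn.Sequential(nn.Conv2d(3, 32, 5, stride=4, padding=2),
                                     nn.ReLU(), nn.Conv2d(32, c, 3, padding=1))
        self.retain = nn.Conv2d(3, c, 5, stride=2, padding=2)
        # Receiver: enhance the low-level branch, then fuse and recover the image.
        self.enhance = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.recover = nn.Sequential(
            nn.ConvTranspose2d(2 * c, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, img, snr_db=10.0):
        def channel(z):  # AWGN channel at the requested SNR
            noise_power = z.pow(2).mean() / 10 ** (snr_db / 10)
            return z + torch.randn_like(z) * noise_power.sqrt()
        hi, lo = channel(self.extract(img)), channel(self.retain(img))
        # Upsample the high-level code to the low-level code's size and fuse.
        hi = nn.functional.interpolate(hi, size=lo.shape[-2:])
        return self.recover(torch.cat([self.enhance(lo), hi], dim=1))

out = MultiLevelJSCC()(torch.rand(1, 3, 64, 64))  # same shape as the input image
```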

Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness

Procedia PDF Downloads 114
9275 Seismic Performance Evaluation of Existing Building Using Structural Information Modeling

Authors: Byungmin Cho, Dongchul Lee, Taejin Kim, Minhee Lee

Abstract:

The procedure for the seismic retrofit of existing buildings includes a seismic evaluation step, in which it is assessed whether the buildings perform satisfactorily under seismic load; based on those results, the buildings are upgraded. To evaluate the seismic performance of a building, the model is usually transformed from an elastic analysis model to an inelastic one. However, when the data are not transferred automatically between tools, engineers must re-enter them manually; this introduces inaccuracy and loss of information, making the analysis results less reliable. Therefore, in this study, a process for the seismic evaluation of existing buildings using structural information modeling is suggested. Structural information modeling makes the work economical and accurate. To this end, the ASCE 41-based seismic evaluation process is examined to determine which parts can be computerized. The structural information modeling process is developed for seismic evaluation using the Perform 3D program, which is commonly used for nonlinear response history analysis. To validate this process, the seismic performance of an existing building is investigated.

Keywords: existing building, nonlinear analysis, seismic performance, structural information modeling

Procedia PDF Downloads 380
9274 Multi-Class Text Classification Using Ensembles of Classifiers

Authors: Syed Basit Ali Shah Bukhari, Yan Qiang, Saad Abdul Rauf, Syed Saqlaina Bukhari

Abstract:

Text classification is the methodology for classifying a given text into the appropriate category from a given set of categories. It is vital to use a proper set of pre-processing, feature selection, and classification techniques to achieve this purpose. In this paper, we use different ensemble techniques, along with variation in the feature selection parameters, to observe the change in overall accuracy as well as in class-level measures such as the precision of each individual category of text. After subjecting our data to pre-processing and feature selection, individual classifiers were tested first; the classifiers were then combined into ensembles to increase their accuracy. We also studied the impact of decreasing the number of classification categories on overall accuracy. Text classification is widely used in sentiment analysis on social media sites such as Twitter to gauge people's opinions about a cause, and to analyze customers' reviews of products or services. Opinion mining is a vital task in data mining, and text categorization is the backbone of opinion mining.
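
The pre-processing, feature selection, and ensemble pipeline can be sketched with scikit-learn; the 20 Newsgroups corpus (fetched on first run) stands in for the paper's own data, and the parameter k is the feature selection knob to vary:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 20 Newsgroups as a stand-in multi-class corpus; the paper's data differs.
data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

ensembles = [("bagging", BaggingClassifier(LogisticRegression(max_iter=1000))),
             ("adaboost", AdaBoostClassifier())]
for name, ens in ensembles:
    # Pre-processing -> feature selection (vary k to probe accuracy) -> ensemble.
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"),
                         SelectKBest(chi2, k=5000), ens)
    pipe.fit(X_tr, y_tr)
    # The report includes the per-category precision discussed in the abstract.
    print(name, classification_report(y_te, pipe.predict(X_te)))
```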

Keywords: natural language processing, ensemble classifier, bagging classifier, AdaBoost

Procedia PDF Downloads 227
9273 Using Predictive Analytics to Identify First-Year Engineering Students at Risk of Failing

Authors: Beng Yew Low, Cher Liang Cha, Cheng Yong Teoh

Abstract:

Due to a lack of continual assessment or grade-related data, identifying first-year engineering students at risk of failing in a polytechnic education is challenging. Our experience over the years tells us that there is no strong correlation between having good entry grades in Mathematics and the Sciences and excelling in hardcore engineering subjects; hence, students at risk of failure cannot be identified on the basis of entry grades in Mathematics and the Sciences alone. These factors compound the difficulty of early identification and intervention. This paper describes the development of a predictive analytics model for the early detection of students at risk of failing and evaluates its effectiveness. Data from continual assessments conducted in term one, supplemented by data on student psychological profiles such as interests and study habits, were used. Three classification techniques, namely Logistic Regression, K Nearest Neighbour, and Random Forest, were used in our predictive model. Based on our findings, Random Forest was the strongest predictor, with an Area Under the Curve (AUC) value of 0.994; correspondingly, its Accuracy, Precision, Recall, and F-Score were also the highest among the three classifiers. Using the Random Forest classifier, students at risk of failure can be identified at the end of term one and assigned to a Learning Support Programme at the beginning of term two. This paper presents our findings and proposes further improvements that can be made to the model.
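
A small sketch of the three-classifier comparison, assuming scikit-learn and synthetic stand-in data; the feature construction is invented for illustration, and only the classifiers and the AUC metric come from the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in features: term-one continual assessment scores plus coded
# psychological-profile items (interests, study habits); 1 = at risk.
X = rng.normal(size=(500, 12))
y = (X[:, :4].mean(axis=1) + 0.5 * rng.normal(size=500) < -0.3).astype(int)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("k nearest neighbour", KNeighborsClassifier()),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```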

Keywords: continual assessment, predictive analytics, random forest, student psychological profile

Procedia PDF Downloads 128
9272 The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition

Authors: Fawaz S. Al-Anzi, Dia AbuZeina

Abstract:

Speech recognition makes an important contribution to promoting new technologies in human-computer interaction, and there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field, there are several methods to extract speech features, of which Mel Frequency Cepstral Coefficients (MFCC) is the most popular. It has long been observed that MFCC is the dominant choice in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for identifying the different speech segments in order to obtain the language phonemes for further training and decoding steps. Owing to this good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps performed to obtain these coefficients using the HTK toolkit.
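
A minimal sketch of MFCC extraction using the librosa package rather than HTK; the example audio is a stand-in downloaded by librosa, and the frame sizes are common defaults, not the paper's settings:

```python
import librosa

# Any mono speech file works; librosa's downloadable example is a stand-in.
y, sr = librosa.load(librosa.ex("trumpet"), sr=16000)

# 13 MFCCs per ~25 ms frame (10 ms hop): windowing, FFT, mel filter bank,
# log compression, then a DCT, the same stages HTK performs internally.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=int(0.025 * sr), hop_length=int(0.010 * sr))
print(mfcc.shape)  # (13, number_of_frames)
```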

Keywords: speech recognition, acoustic features, mel frequency, cepstral coefficients

Procedia PDF Downloads 254
9271 Solving Weighted Number of Operation Plus Processing Time Due-Date Assignment, Weighted Scheduling and Process Planning Integration Problem Using Genetic and Simulated Annealing Search Methods

Authors: Halil Ibrahim Demir, Caner Erden, Mumtaz Ipek, Ozer Uygun

Abstract:

Traditionally, three important manufacturing functions, process planning, scheduling, and due-date assignment, are performed separately and sequentially. Over the past couple of decades, hundreds of studies have been conducted on integrated process planning and scheduling, and numerous works have addressed scheduling with due-date assignment, but the integration of all three functions has not been adequately addressed. Here, the integration of these three important functions is studied using genetic, random-genetic hybrid, simulated annealing, random-simulated annealing hybrid, and random search techniques. The importance of integrating these three functions and the power of meta-heuristics and hybrid heuristics are also studied.
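
A generic simulated annealing skeleton of the kind used as one of the search techniques above; the objective is a toy placeholder, since the paper's joint encoding of process plans, schedules, and due dates is not given in the abstract:

```python
import math
import random

random.seed(0)

def cost(x):
    # Placeholder standing in for the paper's weighted combination of
    # scheduling penalties and due-date costs over a joint encoding.
    return sum((v - 3) ** 2 for v in x)

def simulated_annealing(x, t=100.0, cooling=0.95, steps=2000):
    best = x[:]
    for _ in range(steps):
        cand = x[:]
        cand[random.randrange(len(cand))] += random.choice([-1, 1])  # neighbor move
        delta = cost(cand) - cost(x)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x[:]
        t *= cooling  # geometric cooling schedule
    return best

print(simulated_annealing([0] * 5))  # converges toward the all-3s optimum
```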

Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic search, simulated annealing, hybrid meta-heuristics

Procedia PDF Downloads 465
9270 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we construct the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an auto-encoder-like method so as to retain reasonably compact yet informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and Logistic Regression. The observed results show the effectiveness of the proposed approach compared with ad-hoc feature selection approaches and static node2vec.
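
A hedged sketch of the per-timestamp embedding, concatenation, and auto-encoder compression, assuming the third-party node2vec package and toy random graphs in place of the CDR networks:

```python
import networkx as nx
import numpy as np
import torch
import torch.nn as nn
from node2vec import Node2Vec  # pip install node2vec

# Toy stand-ins for customer-network snapshots at successive timestamps.
snapshots = [nx.erdos_renyi_graph(50, 0.1, seed=s) for s in range(3)]

def embed(g, dim=16):
    n2v = Node2Vec(g, dimensions=dim, walk_length=10, num_walks=20, quiet=True)
    model = n2v.fit(window=5, min_count=1)
    return np.vstack([model.wv[str(n)] for n in g.nodes()])

# Concatenate each node's per-timestamp node2vec embeddings ...
X = torch.tensor(np.hstack([embed(g) for g in snapshots]), dtype=torch.float32)

# ... then compress with a small autoencoder and keep the bottleneck codes.
enc = nn.Sequential(nn.Linear(X.shape[1], 16), nn.Tanh())
dec = nn.Linear(16, X.shape[1])
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dec(enc(X)), X)  # reconstruction objective
    loss.backward()
    opt.step()

codes = enc(X).detach()  # compact dynamic node features for SVM / LogReg
```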

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 310
9269 Students' Online Evaluation: Impact on the Polytechnic University of the Philippines Faculty's Performance

Authors: Silvia C. Ambag, Racidon P. Bernarte, Jacquelyn B. Buccahi, Jessica R. Lacaron, Charlyn L. Mangulabnan

Abstract:

This study aimed to answer the question, "What is the impact of students' online evaluation on PUP faculty's performance?" The problem was addressed through the objective of determining the perceived impact of students' online evaluation on PUP faculty's performance. The objectives were pursued through a quantitative research design using the survey research method. The researchers utilized primary and secondary data: primary data were gathered from a self-administered survey, and secondary data were collected from books, articles in both print and online form, and other related theses. Findings revealed that PUP faculty in general stated that students' online evaluation made a highly positive impact on their performance in terms of 'Knowledge of Subject' and 'Teaching for Independent Learning', giving the highest means of 3.62 and 3.60, respectively, followed by performance in 'Commitment' and 'Management of Learning', with overall means of 3.55 and 3.53. From these findings, the researchers concluded that students' online evaluation made a 'Highly Positive' impact on PUP faculty's performance in all four areas. Furthermore, the findings reveal that PUP faculty encountered many problems regarding the students' online evaluation; that the impact of the online evaluation is significant with respect to the employment status of the faculty; and that most of the PUP faculty recommend reviewing the PUP Online Survey for Faculty Evaluation for improvement. Hence, the researchers recommend that the PUP administration revisit and revise the PUP Online Survey for Faculty Evaluation, specifically reviewing the questions and preparing sets of questions appropriate to the discipline or field of the faculty. The administration should also fully orient the students about the importance, purpose, and impact of online faculty evaluation. Lastly, the researchers suggest that the PUP faculty continue their positive performance and their cooperation with the administration's efforts to address students' concerns, and they urge the students to take the online faculty evaluation honestly and objectively.

Keywords: online evaluation, faculty, performance, Polytechnic University of the Philippines (PUP)

Procedia PDF Downloads 403
9268 Image Analysis for Obturator Foramen Based on Marker-controlled Watershed Segmentation and Zernike Moments

Authors: Seda Sahin, Emin Akata

Abstract:

The obturator foramen is a specific structure in pelvic bone images, and its recognition is a new concept in medical image processing. Moreover, segmentation of bone structures such as the obturator foramen plays an essential role in clinical research in orthopedics. In this paper, we present a novel method that analyzes the similarity between substructures of the imaged region and a hand-drawn template on hip radiographs, detecting the obturator foramen accurately through the integrated use of marker-controlled watershed segmentation and the Zernike moment feature descriptor. Marker-controlled watershed segmentation is applied to separate the obturator foramen from the background effectively. The Zernike moment feature descriptor then matches the binary template image against the segmented binary images of candidate obturator foramens for final extraction. The proposed method was tested on 100 randomly selected hip radiographs; the experimental results show that it is able to segment obturator foramens with 96% accuracy.
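
A toy sketch of the two stages, assuming scikit-image, SciPy, and mahotas; the marker choice (distance-map maxima) and the blob image are illustrative, and template matching on the Zernike descriptors is only indicated in a comment:

```python
import mahotas
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_and_describe(binary_img, radius=40, degree=8):
    """Marker-controlled watershed on a binary image, then a Zernike
    moment descriptor for each segmented region (hedged toy pipeline)."""
    distance = ndi.distance_transform_edt(binary_img)
    # Markers: local maxima of the distance map seed the watershed basins.
    coords = peak_local_max(distance, labels=binary_img, min_distance=10)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary_img)
    feats = []
    for lbl in range(1, labels.max() + 1):
        region = (labels == lbl).astype(np.uint8)
        # Rotation-invariant Zernike moments; compare against the template's
        # descriptor (e.g., by Euclidean distance) for final extraction.
        feats.append(mahotas.features.zernike_moments(region, radius, degree=degree))
    return labels, feats

img = np.zeros((100, 100), dtype=bool)
img[20:50, 20:50] = img[55:85, 55:85] = True  # two toy "foramen" blobs
labels, feats = segment_and_describe(img)
```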

Keywords: medical image analysis, segmentation of bone structures on hip radiographs, marker-controlled watershed segmentation, zernike moment feature descriptor

Procedia PDF Downloads 429
9267 Application of Machine Learning Techniques in Forest Cover-Type Prediction

Authors: Saba Ebrahimi, Hedieh Ashrafi

Abstract:

Predicting the cover type of forests is a challenge for natural resource managers. In this project, we perform a comprehensive comparative study of two well-known classification methods, the support vector machine (SVM) and the decision tree (DT). The comparison is first performed among different variants of each classifier, and then the best variant of each is compared using different evaluation metrics. The effect of boosting and bagging on decision trees is also explored, as is the effect of principal component analysis (PCA) and feature selection. Throughout the project, the forest cover-type dataset from the remote sensing and GIS program is used in all computations.

Keywords: classification methods, support vector machine, decision tree, forest cover-type dataset

Procedia PDF Downloads 209
9266 Evaluation of a Hybrid Knowledge-Based System Using Fuzzy Approach

Authors: Kamalendu Pal

Abstract:

This paper describes the main features of a knowledge-based system evaluation method. System evaluation is placed in the context of a hybrid legal decision-support system, Advisory Support for Home Settlement in Divorce (ASHSD). Legal knowledge for ASHSD is represented in two forms: as rules and as previously decided cases. Besides distinguishing these two forms of knowledge representation, the paper outlines their actual use in a computational framework designed to generate a plausible solution for a given case by using rule-based reasoning (RBR) and case-based reasoning (CBR) in an integrated environment. The suitability assessment of a solution is treated as a multiple-criteria decision-making process in the ASHSD evaluation. The evaluation was performed through a combination of discussions and questionnaires with different user groups. The questionnaire answers were measured as a combination of linguistic variables and fuzzy numbers and interpreted using a defuzzification process. The results show that the designed evaluation method provides a suitable mechanism for improving the performance of the knowledge-based system.
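
A minimal sketch of the linguistic-variable, fuzzy-number, and defuzzification chain; the 5-point scale and its triangular numbers are assumptions, as the abstract does not give the actual ASHSD questionnaire scale:

```python
import numpy as np

# Triangular fuzzy numbers for an illustrative 5-point linguistic scale.
SCALE = {
    "very poor": (0.0, 0.0, 0.25), "poor": (0.0, 0.25, 0.5),
    "fair": (0.25, 0.5, 0.75), "good": (0.5, 0.75, 1.0),
    "excellent": (0.75, 1.0, 1.0),
}

def aggregate(answers):
    """Average the triangular fuzzy numbers of all respondents' answers."""
    tris = np.array([SCALE[a] for a in answers])
    return tuple(tris.mean(axis=0))

def defuzzify(tri):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + b + c) / 3.0

answers = ["good", "excellent", "fair", "good", "good"]
print(round(defuzzify(aggregate(answers)), 3))  # crisp suitability score
```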

Keywords: case-based reasoning, fuzzy number, legal decision-support system, linguistic variable, rule-based reasoning, system evaluation

Procedia PDF Downloads 364
9265 Seismic Response Mitigation of Structures Using Base Isolation System Considering Uncertain Parameters

Authors: Rama Debbarma

Abstract:

The present study deals with the performance of a linear base isolation system in mitigating the seismic response of structures characterized by random system parameters. This involves optimizing the tuning ratio and damping properties of the base isolation system while accounting for uncertain system parameters: the efficiency of a base isolator may be reduced if, owing to the unavoidable presence of parameter uncertainty, it is not tuned to the vibration mode it is designed to suppress. With the aid of matrix perturbation theory and a first-order Taylor series expansion, the total probability concept is used to evaluate the unconditional response of the primary structures under random system parameters. For this, the conditional second-order information on the response quantities is obtained in a random vibration framework using a state-space formulation. Subsequently, the maximum unconditional root mean square displacement of the primary structures is used as the objective function to obtain the optimum damping parameters. A numerical study is performed to elucidate the effect of parameter uncertainties on the optimization of the linear base isolator parameters and on system performance.

Keywords: linear base isolator, earthquake, optimization, uncertain parameters

Procedia PDF Downloads 427
9264 Fuzzy Population-Based Meta-Heuristic Approaches for Attribute Reduction in Rough Set Theory

Authors: Mafarja Majdi, Salwani Abdullah, Najmeh S. Jaddi

Abstract:

Feature selection is one of the global combinatorial optimization problems in machine learning. It is concerned with removing irrelevant, noisy, and redundant data while keeping the original meaning of the data. Attribute reduction in rough set theory is an important feature selection method; since attribute reduction is an NP-hard problem, it is necessary to investigate fast and effective approximate algorithms. In this paper, we propose two feature selection mechanisms based on memetic algorithms (MAs), which combine the genetic algorithm with a fuzzy record-to-record travel algorithm and a fuzzy-controlled great deluge algorithm, respectively, to strike a good balance between local search and genetic search. To verify the proposed approaches, numerical experiments are carried out on thirteen datasets. The results show that the MA approaches are efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
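
A bare-bones memetic algorithm skeleton in the spirit described above; the fitness function is a placeholder for the rough-set dependency degree, and the greedy bit-flip local search stands in for the fuzzy record-to-record travel and great deluge components:

```python
import random

random.seed(1)
N_ATTRS = 20

def fitness(subset):
    # Placeholder for the rough-set dependency degree of the decision
    # attribute on `subset`, penalized by subset size (the real measure
    # requires an actual dataset).
    quality = sum(1 for a in subset if a % 3 == 0)  # toy "relevant" attributes
    return quality - 0.1 * len(subset)

def local_search(ind):
    """Memetic refinement step: greedy single-attribute flips."""
    for a in range(N_ATTRS):
        trial = ind ^ {a}  # toggle attribute a in or out of the reduct
        if fitness(trial) > fitness(ind):
            ind = trial
    return ind

def memetic(pop_size=20, gens=50):
    pop = [set(random.sample(range(N_ATTRS), 8)) for _ in range(pop_size)]
    for _ in range(gens):
        a, b = random.sample(pop, 2)
        child = {g for g in a | b if random.random() < 0.5} or {0}  # crossover
        child = local_search(child)                                  # refine
        pop.sort(key=fitness)
        pop[0] = child  # replace the worst individual
    return max(pop, key=fitness)

print(sorted(memetic()))
```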

Keywords: rough set theory, attribute reduction, fuzzy logic, memetic algorithms, record to record algorithm, great deluge algorithm

Procedia PDF Downloads 449
9263 Machine Learning-Driven Prediction of Cardiovascular Diseases: A Supervised Approach

Authors: Thota Sai Prakash, B. Yaswanth, Jhade Bhuvaneswar, Marreddy Divakar Reddy, Shyam Ji Gupta

Abstract:

Across the globe, there are a lot of chronic diseases, and heart disease stands out as one of the most perilous. Sadly, many lives are lost to this condition, even though early intervention could prevent such tragedies. However, identifying heart disease in its initial stages is not easy. To address this challenge, we propose an automated system aimed at predicting the presence of heart disease using advanced techniques. By doing so, we hope to empower individuals with the knowledge needed to take proactive measures against this potentially fatal illness. Our approach towards this problem involves meticulous data preprocessing and the development of predictive models utilizing classification algorithms such as Support Vector Machines (SVM), Decision Tree, and Random Forest. We assess the efficiency of every model based on metrics like accuracy, ensuring that we select the most reliable option. Additionally, we conduct thorough data analysis to reveal the importance of different attributes. Among the models considered, Random Forest emerges as the standout performer with an accuracy rate of 96.04% in our study.

Keywords: support vector machines, decision tree, random forest

Procedia PDF Downloads 32
9262 Performance Comparison of Cooperative Banks in the EU, USA and Canada

Authors: Matěj Kuc

Abstract:

This paper compares profitability measures of cooperative banks from two developed regions: the European Union, and the United States of America together with Canada. We created a balanced dataset of more than 200 cooperative banks covering the 2011-2016 period, ran a series of tests, and estimated random effects models on the panel data. We found that American and Canadian cooperatives are more profitable in terms of return on assets (ROA) and return on equity (ROE), while there is no significant difference in net interest margin (NIM). Our results show that the North American cooperative banks have adapted better to the current market environment.
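
A sketch of random effects estimation on a synthetic bank-year panel, assuming the linearmodels package; the variable names and data-generating process are invented for illustration:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import RandomEffects  # pip install linearmodels

rng = np.random.default_rng(0)
banks, years = 200, list(range(2011, 2017))
idx = pd.MultiIndex.from_product([range(banks), years], names=["bank", "year"])
df = pd.DataFrame({
    # Toy stand-ins for bank-level covariates and the region dummy.
    "size": rng.normal(size=len(idx)),
    "north_america": np.repeat(rng.integers(0, 2, banks), len(years)),
}, index=idx)
df["roa"] = (0.5 * df["north_america"] + 0.2 * df["size"]
             + rng.normal(scale=0.5, size=len(df)))

# Random-effects estimation on the balanced bank-year panel.
res = RandomEffects(df["roa"], df[["size", "north_america"]]).fit()
print(res.params)  # a positive north_america coefficient mirrors the ROA finding
```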

Keywords: cooperative banking, panel data, profitability measures, random effects

Procedia PDF Downloads 111
9261 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancement over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 36
9260 Using LMS as an E-Learning Platform in Higher Education

Authors: Mohammed Alhawiti

Abstract:

Assessment of learning management systems has received less attention than it deserves. This paper investigates the evaluation of learning management systems (LMS) within an educational setting, both as an online learning system and as a helpful tool for a multidisciplinary learning environment. The study suggests a theoretical e-learning evaluation model covering multiple dimensions: system, service, and content quality, the learner's perspective, and the instructor's attitudes. A survey was conducted among 105 e-learners; the sample consisted of students at both undergraduate and master's levels. Content validity and reliability were tested through the instrument. The findings suggest the suitability of the proposed model for evaluating learner satisfaction through an LMS. The results of this study should be valuable for both instructors and users of e-learning systems.

Keywords: e-learning, LMS, higher education, management systems

Procedia PDF Downloads 403
9259 A Sequential Approach for Random-Effects Meta-Analysis

Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya

Abstract:

The objective in meta-analysis is to combine results from several independent studies in order to produce generalizations and provide an evidence base for decision making. However, recent studies show that the magnitude of effect size estimates reported in many areas of research changes with publication year, which can impair the results and conclusions of a meta-analysis. A number of sequential methods have been proposed for monitoring effect size estimates in meta-analysis, but they are based on statistical theory applicable to the fixed-effect model (FEM). For the random-effects model (REM), the analysis incorporates the heterogeneity variance, tau-squared, whose estimation creates complications. This paper proposes the use of the truncated CUSUM-type test of Gombay and Serban (2005), with asymptotically valid critical values, for sequential monitoring under the REM. Simulation results show that the test does not control the Type I error well and is not recommended; further work is required to derive an appropriate test in this important area of application.
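
A simplified one-sided CUSUM monitor over standardized effect sizes, illustrating the sequential idea only; this is not the Gombay and Serban (2005) statistic, and the threshold is arbitrary rather than an asymptotically valid critical value:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stream of study effect sizes whose mean drifts after study 25,
# mimicking the temporal changes the paper aims to detect.
effects = np.concatenate([rng.normal(0.3, 0.1, 25), rng.normal(0.5, 0.1, 25)])
variances = np.full_like(effects, 0.01)

def cusum_monitor(effects, variances, threshold=3.0, allowance=0.5):
    s, pooled, wsum = 0.0, 0.0, 0.0
    for k, (y, v) in enumerate(zip(effects, variances), 1):
        w = 1.0 / v
        pooled = (pooled * wsum + w * y) / (wsum + w)  # running weighted mean
        wsum += w
        # One-sided CUSUM recursion on standardized deviations.
        s = max(0.0, s + (y - pooled) / np.sqrt(v) - allowance)
        if s > threshold:
            return k  # signal: effect sizes appear to have shifted
    return None

print(cusum_monitor(effects, variances))  # flags a study soon after the drift
```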

Keywords: meta-analysis, random-effects model, sequential test, temporal changes in effect sizes

Procedia PDF Downloads 464
9258 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution is the task of extracting features to identify the authors of anonymous documents. Much previous work on authorship attribution focuses on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features with regression or other transparent machine learning methods gives a portrait of an author's writing style, but such methods do not capture syntactic information (e.g., dependency relationships) or semantic information (e.g., topics). In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks; however, few works combine the two, and predictions made by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we utilize not only the statistical style and content features but also syntactic and semantic features. Unlike an end-to-end neural model, our method separates feature selection and prediction into two steps: an attentive n-gram network selects useful features, and logistic regression provides the prediction together with an understandable representation of writing style. Experiments show that our extracted features improve on state-of-the-art methods across three benchmark datasets.

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 132
9257 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes that curves spacetime; increasing the concentration of holes leads to length contraction and time dilation, because the holes possess neither extension nor duration. In the limiting case where space consists of holes only, the distance between any two points is zero and time stops: outside of the Universe, the properties of extension and duration do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation through its own properties alone. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle, and so on. Particles do not have trajectories because spacetime is discontinuous and contains impassable microscopic 'walls': simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in a discontinuous spacetime that contains impassable holes. Spacetime 'boils' continually due to the appearance of vacuum holes. For teleportation to be possible, a body must be sent outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Because the body disappears in one volume and reappears in another random volume without traversing the physical space between them, this transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality or special relativity, owing to its random nature and other properties. Although Hole Teleportation is random in nature, it could be used for the colonization of extrasolar planets by a method of 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship will appear near a habitable planet. Vacuum holes could be created experimentally using the method proposed by Descartes: removing a body from a vessel without permitting another body to occupy its volume.

Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps

Procedia PDF Downloads 420
9256 A Spectral Decomposition Method for Ordinary Differential Equation Systems with Constant or Linear Right Hand Sides

Authors: R. B. Ogunrinde, C. C. Jibunoh

Abstract:

In this paper, a spectral decomposition method is developed for the direct integration of stiff and nonstiff homogeneous linear ODE systems with linear, constant, or zero right hand sides (RHSs). The method does not require iteration; it obtains solutions at arbitrary points t within the interval of integration by direct evaluation. All numerical solutions obtained for this class of systems coincide with the exact theoretical solutions. In particular, solutions of homogeneous linear systems, i.e., those with zero RHS, conform to the exact analytical solutions of the systems in terms of t.
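
A worked NumPy example of the spectral decomposition idea for x'(t) = Ax + b with a constant RHS: diagonalize A, solve for the particular solution, and evaluate directly at any t (the matrix here is illustrative, not from the paper):

```python
import numpy as np

# With A = V diag(evals) V^{-1}, the general solution is
# x(t) = V exp(diag(evals) t) c + xp, evaluated directly at any t.
A = np.array([[-2.0, 1.0], [1.0, -3.0]])
b = np.array([1.0, 0.0])
x0 = np.array([2.0, -1.0])

evals, V = np.linalg.eig(A)
xp = -np.linalg.solve(A, b)       # particular solution for the constant RHS
c = np.linalg.solve(V, x0 - xp)   # coefficients fixed by the initial condition

def x(t):
    """Direct evaluation at an arbitrary point t, no stepwise iteration."""
    return (V * np.exp(evals * t)) @ c + xp

for t in (0.0, 0.5, 3.0):         # any random points in the interval
    print(t, x(t))
```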

Keywords: spectral decomposition, linear RHS, homogeneous linear systems, eigenvalues of the Jacobian

Procedia PDF Downloads 326
9255 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique

Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu

Abstract:

Medical imaging, and specifically medical image processing, is becoming one of the most dynamically developing areas of medical science, leading to new approaches in the prevention, diagnosis, and treatment of various diseases. In diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or incorrect sampling of lung tissue; identifying and demarcating masses when detecting cancer within lung tissue is therefore a critical challenge. In this work, a segmentation system based on image processing techniques is applied for detection purposes. In particular, a novel lung cancer detection algorithm is presented and validated through simulation on CT images, using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification; after feature extraction, the features carrying useful information are selected. The output lung cancer image is obtained with accuracies of 96.3% and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, with further details made available to machine vision systems for recognising objects in lung CT scan images.
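
The multilevel thresholding stage can be sketched with scikit-image's multi-Otsu implementation; a stock sample image stands in for a lung CT slice, and the later feature extraction and classification stages are omitted:

```python
import numpy as np
from skimage import data, filters

# Multilevel (multi-Otsu) thresholding as a stand-in for the CT
# segmentation stage; skimage's sample image replaces an actual CT slice.
image = data.camera()
thresholds = filters.threshold_multiotsu(image, classes=3)

# Quantize the image into regions; on a CT slice, one class would isolate
# candidate lesion tissue for the subsequent feature extraction steps.
regions = np.digitize(image, bins=thresholds)
print(thresholds, np.bincount(regions.ravel()))
```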

Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing

Procedia PDF Downloads 93
9254 Segmentation of Arabic Handwritten Numeral Strings Based on Watershed Approach

Authors: Nidal F. Shilbayeh, Remah W. Al-Khatib, Sameer A. Nooh

Abstract:

Offline Arabic handwriting recognition is considered one of the most challenging research topics. Arabic handwritten numeral strings are used to automate systems that deal with numbers, such as postal codes, bank account numbers, and numbers on car plates. Segmentation of connected numerals is the main bottleneck in handwritten numeral recognition systems; improving it can, in turn, increase the speed and efficiency of recognition. In this paper, we propose algorithms for the automatic segmentation and feature extraction of Arabic handwritten numeral strings based on the watershed approach. The algorithms have been designed and implemented to segment and extract strings of numeral digits written by hand, especially the courtesy amount on bank checks. The segmentation algorithm partitions the string into multiple regions that can be associated with the properties of one or more criteria, and the numeral extraction algorithm separates the numeral string into individual digits. Both algorithms have been tested successfully and efficiently on all types of numerals.

Keywords: handwritten numerals, segmentation, courtesy amount, feature extraction, numeral recognition

Procedia PDF Downloads 378
9253 The Development of an Innovator Teacher Training Curriculum to Create Instructional Innovation According to the Active Learning Approach to Enhance Learning Achievement in Private Schools in Phayao Province

Authors: Palita Sooksamran, Katcharin Mahawong

Abstract:

This research aims to present the development of an innovator teacher training curriculum for creating instructional innovation according to the active learning approach to enhance learning achievement. The research and development process was carried out in three steps. Step 1, a study of the needs necessary to develop a training curriculum: data were collected with a questionnaire from a sample of 176 teachers in private schools in Phayao province providing basic education; the sample was defined using a table of random numbers and stratified sampling, with the school as the stratification layer. Step 2, training curriculum development: the tools used were the developed training curriculum and curriculum assessments, with nine experts checking the appropriateness of the draft curriculum; the statistics used in the data analysis were the mean and standard deviation (S.D.). Step 3, a study of the effectiveness of the training curriculum: a one-group pretest/posttest design was applied, with a sample of 35 teachers from private schools in Phayao province who volunteered to attend. The results showed the following. 1. In descending order of the needs index, the essential needs were: selecting and creating multimedia, videos, and applications for learning management (the highest); developing multimedia, videos, and applications for learning management; and selecting innovative learning management techniques and methods for problem-solving learning. 2. The components of the training curriculum comprise principles, aims, scope of content, training activities, learning materials and resources, and supervision and evaluation. The scope of the curriculum consists of basic knowledge about learning management innovation, active learning, lesson plan design, learning materials and resources, learning measurement and evaluation, implementation of lesson plans in the classroom, and supervision and monitoring. The quality of the draft training curriculum was evaluated at the highest level; the experts suggested that the course objectives be phrased in words that convey outcomes. 3. Regarding the effectiveness of the training curriculum: 1) the teachers' cognitive outcomes in creating innovative learning management were at a high level of relative gain score; 2) the assessment of learning management ability according to the active learning approach, as rated by two education supervisors, was very high overall; 3) of the teachers' instructional innovations based on the active learning approach, 7 were evaluated as outstanding works and 26 passed the standard; 4) the overall learning achievement of students taught by the 35 sample teachers was at a high level of relative gain score; and 5) the teachers' satisfaction with the training curriculum was at the highest level.

Keywords: training curriculum, innovator teachers, active learning approach, learning achievement

Procedia PDF Downloads 45
9252 Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network

Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar

Abstract:

The wireless sensor network is one of the most promising communication networks for monitoring remote environmental areas. In such a network, all the sensor nodes communicate with each other via radio signals. The sensor nodes have capabilities of sensing, data storage, and processing, and they collect information from neighboring nodes and relay it toward a particular node. Data collection and processing are done by data aggregation techniques. For data aggregation, a clustering technique is implemented in the sensor network by means of a self-organizing feature map (SOFM) neural network. Some of the sensor nodes are selected as cluster head nodes; information is aggregated at the cluster head nodes from the non-cluster-head nodes and then transferred to the base station (or sink nodes). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network: clustered data are selected for transfer to the base station instead of all the information aggregated at the cluster head nodes. This reduces battery consumption in managing the huge volume of data and extends the network lifetime to a great extent.
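
A toy sketch of SOFM-based cluster formation and cluster head selection, assuming the minisom package; the node features (coordinates plus residual energy) and the energy-based head rule are illustrative assumptions:

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(0)
# Toy stand-in for sensor-node readings: (x, y) position and residual energy.
nodes = np.column_stack([rng.uniform(0, 100, (60, 2)), rng.uniform(0, 1, (60, 1))])

# A small SOFM: nodes mapped to the same winning neuron form one cluster,
# mimicking cluster formation in the WSN.
som = MiniSom(2, 2, input_len=3, sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(nodes, num_iteration=500)

clusters = {}
for i, v in enumerate(nodes):
    clusters.setdefault(som.winner(v), []).append(i)

# Pick the highest-energy node in each cluster as the cluster head that
# aggregates data before forwarding to the base station.
heads = {c: max(ids, key=lambda i: nodes[i, 2]) for c, ids in clusters.items()}
print(heads)
```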

Keywords: artificial neural network, data clustering, self organization feature map, wireless sensor network

Procedia PDF Downloads 511
9251 A Strategic Partner Evaluation Model for the Project Based Enterprises

Authors: Woosik Jang, Seung H. Han

Abstract:

Optimal partner selection is one of the most important factors in a project's success. In practice, however, there are gaps in the perception of success depending on the enterprise's role in the project, and these frequently weaken the relation between partner evaluation results and the project's final performance. To meet this challenge, this study proposes a strategic partner evaluation model that accounts for the perception gaps between enterprises. Surveys were performed three times: for factor selection, perception gap analysis, and case application. Eight factors were then extracted, using an independent-sample t-test and the Borich model, to set up the evaluation model. Finally, in the case applications, only 16 of the 22 enterprises graded "Good" by the existing model were re-evaluated as "Good", while 12 of the 19 enterprises graded "Bad" by the existing model were re-evaluated as "Good". Consequently, the perception-gap-based evaluation model is expected to improve decision-making quality and enhance the probability of project success.

Keywords: partner evaluation model, project based enterprise, decision making, perception gap, project performance

Procedia PDF Downloads 153
9250 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven, machine learning regression framework that aids strategists in formulating key decisions to improve a facility's market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, with data spanning 60 key facilities in Washington State over about 3 years of history. In the current analysis, market share is defined as the ratio of a facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, followed by a regression approach to evaluate and predict market share. The model-agnostic technique SHAP is leveraged to quantify the relative importance of features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities empirically, calculating a competitive factor to interpret the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level; this technique is robust because it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder for various business rules (for example, quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, the permutation feature importance of the attributes is calculated; for relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, is incorporated, which helps identify and rank the attributes at each facility that impact its market share. This approach amalgamates two popular and efficient modeling practices, machine learning with graphs and tree-based regression techniques, to reduce bias and help drive strategic business decisions.
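
A compact sketch of the SHAP-on-random-forest step, assuming the shap package and synthetic facility attributes; the DAG-based competitor identification is omitted:

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy stand-ins for facility-level attributes driving market share.
X = rng.normal(size=(300, 4))
share = 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=300)

model = RandomForestRegressor(random_state=0).fit(X, share)

# TreeExplainer gives per-facility (per-row) attribute contributions, so
# drivers of predicted market share can be ranked for each facility.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))  # global ranking of the 4 attributes
```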

Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP

Procedia PDF Downloads 87
9249 Proposal to Increase the Efficiency, Reliability and Safety of the Centre of Data Collection Management and Their Evaluation Using Cluster Solutions

Authors: Martin Juhas, Bohuslava Juhasova, Igor Halenar, Andrej Elias

Abstract:

This article deals with the possibility of increasing the efficiency, reliability, and safety of the system for teledosimetric data collection management and evaluation, as part of a complex study for the activity "Research of data collection, their measurement and evaluation with mobile and autonomous units" within the project "Research of monitoring and evaluation of non-standard conditions in the area of nuclear power plants". Possible weaknesses in the existing system are identified, and a study of available cluster solutions, with the possibility of deploying them in the analysed system, is presented.

Keywords: teledosimetric data, efficiency, reliability, safety, cluster solution

Procedia PDF Downloads 508