Search results for: inherent feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2213

1973 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting

Authors: Analise Borg, Paul Micallef

Abstract:

Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in ways to organize this volume of audio information without relying on human intervention to generate metadata. Many applications have emerged on the market that can identify a piece of music in a short time, yet audio effects and degradation make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system based on a non-parametric algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients (MFCCs) and the MPEG-7 basic descriptors. During non-parametric modelling, the extracted feature coefficients are replaced by bin numbers. The results show that the non-parametric analysis offers performance comparable to results reported in the literature.
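To make the parametric (GMM) baseline concrete, here is a minimal sketch assuming librosa for MFCC extraction and scikit-learn for the mixture model; the file name and model sizes are illustrative, not the authors' settings.

```python
import librosa
from sklearn.mixture import GaussianMixture

def fingerprint_model(audio_path, n_mfcc=13, n_components=8):
    y, sr = librosa.load(audio_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(mfcc.T)  # one GMM per reference track acts as its "fingerprint"
    return gmm
```

An unknown clip would then be scored against every reference GMM and assigned to the model with the highest average log-likelihood.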

Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7

Procedia PDF Downloads 421
1972 Video Shot Detection and Key Frame Extraction Using Faber-Schauder DWT and SVD

Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi

Abstract:

Key frame extraction methods select the most representative frames of a video and can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and the textures near them. When a video frame changes noticeably, its dominant blocks change as well, and a key frame can be extracted. The dominant blocks of every frame are computed; feature vectors are then extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition (SVD) is used to calculate the ranks of these matrices over sliding windows. Finally, the computed ranks are traced to extract the key frames of the video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
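A minimal numpy sketch of the SVD step described above: the numerical rank over a sliding window of per-frame feature vectors, with rank changes flagging candidate key frames. The window size and tolerance are assumptions.

```python
import numpy as np

def sliding_window_ranks(features, window=10, tol=1e-6):
    """features: (n_frames, n_dims) matrix of per-frame feature vectors."""
    ranks = []
    for start in range(features.shape[0] - window + 1):
        block = features[start:start + window]          # (window, n_dims)
        s = np.linalg.svd(block, compute_uv=False)      # singular values
        ranks.append(int(np.sum(s > tol * s[0])))       # numerical rank
    return np.array(ranks)

# A change in rank between consecutive windows marks a candidate key frame.
```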

Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition

Procedia PDF Downloads 397
1971 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has increased greatly in recent years, so it has become more important to understand customers’ needs in this strong market, especially for customers who are looking to change their service providers. Churn prediction is therefore a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish it. Churn prediction has become a very important classification topic for machine learning in the telecommunications industry, and understanding the factors behind customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers’ churn based on their past service usage history. To that end, the study makes use of feature selection, normalization, and feature engineering, and then compares the performance of four machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. The results of this study compare favourably with existing models: Gradient Boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
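As a rough illustration of the four-model comparison (not the authors' exact pipeline), a scikit-learn sketch might look as follows; the file name and column names are hypothetical placeholders for the Orange dataset.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("orange_churn.csv")                  # hypothetical file name
X, y = df.drop(columns=["churn"]), df["churn"]
X = StandardScaler().fit_transform(X)                 # normalization step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(),
    "forest": RandomForestClassifier(),
    "gboost": GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(name, f1_score(y_te, model.predict(X_te)), roc_auc_score(y_te, proba))
```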

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 134
1970 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality secures product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Machine learning methods can process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets; changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Machine learning applications therefore require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, time periods with comparable production conditions can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, an AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
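A hedged sketch of the final classification step, with scikit-learn's AdaBoostClassifier standing in for the "ada boosting" model and univariate selection standing in for the paper's stability-based feature selection; the merged data file and label column are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("valve_data.csv")        # hypothetical merged data set
X, y = df.drop(columns=["leakage"]), df["leakage"]

# Feature selection keeps only stable, informative covariates before boosting.
clf = make_pipeline(SelectKBest(f_classif, k=20), AdaBoostClassifier())
print(cross_val_score(clf, X, y, scoring="f1", cv=5).mean())
```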

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
1969 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface

Authors: Ping Tan, Xiaomeng Su, Yi Shen

Abstract:

Motion intention in a motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When a subject imagines moving different limbs or different body parts, the rhythm components and bandwidth change, and this varies from person to person. Finding the effective sensorimotor frequency band of a subject is therefore directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a Minimum Distance to Riemannian Mean (MDRM) classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using a bank of band-pass filters; the spatial covariance matrices of each frequency-band signal are then computed as feature vectors. These feature vectors are classified by the MDRM method, and cross-validation is employed to determine the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.
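One branch of the filter-bank pipeline could be sketched as follows, assuming SciPy for band-pass filtering and the pyriemann package for covariance estimation and its MDM (minimum distance to Riemannian mean) classifier; the band edges and sampling rate are illustrative, not the paper's.

```python
from scipy.signal import butter, filtfilt
from pyriemann.classification import MDM
from pyriemann.estimation import Covariances

def bandpass(X, low, high, fs=250, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)   # X: (n_trials, n_channels, n_samples)

# X_train, y_train are assumed preloaded EEG epochs and labels.
X_band = bandpass(X_train, 8, 12)               # one sub-band of the bank
covs = Covariances().fit_transform(X_band)      # spatial covariance features
clf = MDM(metric="riemann").fit(covs, y_train)  # Riemannian-mean classifier
```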

Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean

Procedia PDF Downloads 123
1968 Evaluation of Robust Feature Descriptors for Texture Classification

Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo

Abstract:

Texture is an important characteristic of real and synthetic scenes, and texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been proposed to extract texture features, object recognition remains a difficult task due to the complex aspects of texture. Recently, robust and scale-invariant image features such as SIFT, SURF, and ORB have been used successfully in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors for texture classification using k-means clustering. Different classifiers, including k-NN, Naive Bayes, a back-propagation neural network, Decision Tree, and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS, and Brodatz. Experimental results reveal SIFT as the holder of the best average accuracy rate on UIUCTex and KTH-TIPS, while SURF has the advantage on the Brodatz texture set. The back-propagation neural network achieves the best test-set classification among all the classifiers used.
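For one descriptor, the bag-of-visual-words pipeline can be sketched as below using ORB (patent-free in OpenCV) with k-means clustering; SIFT or SURF would slot in the same way where available. The vocabulary size, classifier choice, and the train_paths/train_labels variables are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# train_paths / train_labels are assumed lists of image paths and classes.
orb = cv2.ORB_create()

def orb_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc

# Vocabulary: k-means over all training descriptors; each image becomes a
# normalized histogram of its descriptors' cluster assignments.
all_desc = np.vstack([orb_descriptors(p) for p in train_paths])
vocab = KMeans(n_clusters=64, random_state=0).fit(all_desc.astype(np.float64))

def encode(path):
    words = vocab.predict(orb_descriptors(path).astype(np.float64))
    return np.bincount(words, minlength=64) / len(words)

knn = KNeighborsClassifier().fit([encode(p) for p in train_paths], train_labels)
```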

Keywords: texture classification, texture descriptor, SIFT, SURF, ORB

Procedia PDF Downloads 369
1967 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that estimates the most reliable frequencies based on statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution improves on the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with accuracy and an F1 score greater than 96% with the proposed method.
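The PCA baseline the paper compares against can be sketched in a few lines: fit PCA on normal-condition frequency features and flag test points whose reconstruction error exceeds a threshold. The 99th-percentile threshold rule here is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_detector(X_normal, n_components=2):
    pca = PCA(n_components=n_components).fit(X_normal)
    recon = pca.inverse_transform(pca.transform(X_normal))
    errors = np.linalg.norm(X_normal - recon, axis=1)
    return pca, np.percentile(errors, 99)       # threshold from normal data

def is_anomaly(pca, threshold, X):
    err = np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)
    return err > threshold   # True where a point falls outside the normal class
```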

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 123
1966 The Meaning Structures of Political Participation of Young Women: Preliminary Findings in a Practical Phenomenology Study

Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso

Abstract:

This communication presents the preliminary emerging themes in research on the political participation of young women. The study follows a qualitative methodology, in particular the applied hermeneutic phenomenological method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience in political participation. The data collection techniques are the descriptive story and the phenomenological interview. The first methodological steps have been to: 1) collect and select stories of lived experience in political participation, 2) select descriptions of lived experience (DLEs) in political participation from the chosen stories, 3) prepare phenomenological interviews from the selected DLEs, and 4) conduct phenomenological thematic analysis (PTA) of the DLEs. We have so far initiated the PTA on 5 vignettes. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics. Phenomenology is a descriptive philosophy of pure experience and essences, through which we seek to capture an experience at its origins without categorizing, interpreting, or theorizing it. Hermeneutics, on the other hand, may be defined as a philosophical current that can be applied to data analysis. Max van Manen wrote that hermeneutic phenomenology is a method of abstemious reflection on the basic structures of the lived experience of human existence. In hermeneutic phenomenology we focus, then, on the way we experience “things” in the first person, seeking to capture the world exactly as we experience it, not as we categorize or conceptualize it. In this study, the empirical methods used were the (written) lived experience description and the conversational interview. For the short stories, participants were asked: “What was your lived experience of participation in politics as a young woman? Can you tell me any stories or anecdotes that you think exemplify or typify your experience?”. The questions were accompanied by a list of guidelines for writing descriptive vignettes, and the analytical method was PTA. Among the provisional results, we found preliminary emerging themes which, as the investigation advances, could result in meaning structures of the political participation of young women: complicity may be inherent/essential in political participation as a young woman; feelings may be essential/inherent; hope, frustration, and satisfaction may each be essential in authentic political participation as a young woman; there may be an inherent/essential tension between the individual and the collective; and political participation as a young woman may include moments of public demonstration.

Keywords: applied hermeneutic phenomenology, hermeneutics, phenomenology, political participation

Procedia PDF Downloads 99
1965 Russian Spatial Impersonal Sentence Models in Translation Perspective

Authors: Marina Fomina

Abstract:

The paper focuses on the category of semantic subject within the framework of a functional approach to linguistics. The semantic subject is related to similar notions such as the grammatical subject and the bearer of the predicative feature. It is the multifaceted nature of the category of subject that 1) raises a number of issues that, syntax-wise, remain to be dealt with (cf. semantic vs. syntactic functions / sentence parts vs. parts of speech issues, etc.), and 2) results in a variety of approaches to the category of subject, such as formal grammatical, semantic/syntactic (functional), and communicative approaches. Many linguists consider the prototypical approach to the category of subject to be the most instrumental, as it reveals the integrity of the denotative and linguistic components of the conceptual category. This approach treats the subject as a source of a non-passive predicative feature, an element of the subject-predicate-object situation that can take on a variety of semantic roles, cf.: 1) an agent (He carefully surveyed the valley stretching before him), 2) an experiencer (I feel very bitter about this), 3) a recipient (I received this book as a gift), 4) a causee (The plane broke into three pieces), 5) a patient (This stove cleans easily), etc. It is believed that the variety of roles stems from the radial (prototypical) structure of the category, with some members more central than others. Translation-wise, the most “treacherous” subject types are the peripheral ones. The paper 1) establishes the peripheral status of spatial impersonal sentence models such as U menia v ukhe zvenit (lit. I-Gen. in ear buzzes) within the category of semantic subject, 2) makes a structural and semantic analysis of the models, 3) focuses on their Russian-English translation patterns, and 4) reveals non-prototypical features of the subjects in the English equivalents.

Keywords: bearer of predicative feature, grammatical subject, impersonal sentence model, semantic subject

Procedia PDF Downloads 370
1964 Lithuanian Sign Language Literature: Metaphors at the Phonological Level

Authors: Anželika Teresė

Abstract:

In order to address issues in sign language linguistics, matters pertaining to maintaining high quality in sign language (SL) translation, misconceptions about SL and deaf people, and awareness and understanding of the deaf community's heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created using the phonological parameters: handshape, location, movement, palm orientation, and nonmanual features. The study covered in this presentation is twofold, involving both micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in SL literature across a range of sign languages, and this study follows that practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature. The analysis employs ELAN software, widely used in SL research. The aim is to examine how specific types of each phonological parameter are used to create metaphors in LSL literature and what metaphors are created. The results show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding an additional meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined; notably, the poetic context has revealed that the latter metaphor can also be read as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study has detected locations, nonmanual features, and other parameter types never mentioned in previous SL research as being used for the creation of metaphors.

Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics

Procedia PDF Downloads 136
1963 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children

Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh

Abstract:

Autism Spectrum Disorder (ASD) is a condition related to issues with brain development that affects how a person recognises and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors with accurate diagnosis and management of the condition. Therefore, it is crucial to develop a method that achieves good, highly accurate results for the measurement of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB), and Support Vector Machine (SVM). Feature selection was then used to reduce the number of attributes from the ASD datasets while preserving model performance. As a result, we found that the best result was provided by the Support Vector Machine (SVM), achieving a score of 0.98 on the toddler dataset and 0.99 on the children dataset.
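A compact sketch of the best-performing setup, assuming scikit-learn; the data file, label column, and k are illustrative, and chi2 selection assumes the non-negative (e.g., binary questionnaire) features typical of ASD screening data.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

df = pd.read_csv("asd_children.csv")          # hypothetical screening data
X, y = df.drop(columns=["asd"]), df["asd"]

# Univariate selection keeps fewer attributes while preserving performance.
model = make_pipeline(SelectKBest(chi2, k=10), SVC())
print(cross_val_score(model, X, y, cv=5).mean())
```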

Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine

Procedia PDF Downloads 150
1962 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes two stages of a learning-based fault detection procedure for alternators. The procedure covers three machine conditions: a shortened brush, a high-impedance relay, and a healthy alternator. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (Support Vector Machine) were compared to determine the more suitable classifier, evaluated by the mean-squared-error criterion. The modules work together to detect possible faulty conditions in operating machines. To test the method's performance, a signal database was prepared by imposing the different conditions on a laboratory setup. The results indicate that implementing this method achieves satisfactory performance.

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 373
1961 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: Hayriye Anıl, Görkem Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries with strong potential in geothermal energy production thanks to its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand when the production forecasts given in the day-ahead market are inadequate. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated with these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than on the basic dataset. It is concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, making its accounts more optimal and profitable.
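A small sketch of the winning model: Support Vector Regression on lagged generation features. The lag structure, forecast horizon, and data file are assumptions, not Zorlu's actual feature engineering.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

series = pd.read_csv("generation.csv", parse_dates=["time"], index_col="time")["mw"]

def lag_frame(s, n_lags=24):
    """Turn an hourly series into a supervised table of lagged features."""
    df = pd.DataFrame({f"lag_{k}": s.shift(k) for k in range(1, n_lags + 1)})
    df["target"] = s
    return df.dropna()

data = lag_frame(series)
X, y = data.drop(columns=["target"]), data["target"]
model = make_pipeline(StandardScaler(), SVR()).fit(X[:-336], y[:-336])
pred = model.predict(X[-336:])        # two weeks ahead at hourly resolution
```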

Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting

Procedia PDF Downloads 110
1960 A Dynamic Software Product Line Approach to Self-Adaptive Genetic Algorithms

Authors: Abdelghani Alidra, Mohamed Tahar Kimour

Abstract:

Genetic algorithms must adapt themselves at design time to cope with the specific requirements of the search problem and at runtime to balance exploration and convergence objectives. In a previous article, we showed that modeling and implementing Genetic Algorithms (GAs) using the software product line (SPL) paradigm is very worthwhile because they constitute a product family sharing a common code base. In the present article, we propose to extend the use of the feature model of the genetic algorithms family to model the potential states of the GA, in what is called a Dynamic Software Product Line. The objective of this paper is the systematic generation of a reconfigurable architecture that supports the dynamics of the GA and is easily deduced from the feature model. The resulting GA is able to perform dynamic reconfiguration autonomously to speed up the convergence process while producing better solutions. Another important advantage of our approach is the exploitation of recent advances in the domain of dynamic SPLs to enhance the performance of GAs.
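As a toy illustration of the runtime reconfiguration being targeted (not the paper's DSPL machinery), the GA below switches its own mutation rate, one "feature" of the product line, when progress stalls, trading exploration against convergence; all numbers and the fitness function are placeholders.

```python
import random

def evolve(fitness, n_genes=20, pop_size=50, generations=200):
    pop = [[random.uniform(-5, 5) for _ in range(n_genes)] for _ in range(pop_size)]
    mutation_rate, best_prev = 0.01, float("-inf")
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = fitness(pop[0])
        if best <= best_prev:                        # stalled: explore more
            mutation_rate = min(0.2, mutation_rate * 2)
        else:                                        # improving: exploit
            mutation_rate = max(0.01, mutation_rate / 2)
        best_prev = best
        parents = pop[: pop_size // 2]
        children = [
            [g + random.gauss(0, 1) if random.random() < mutation_rate else g
             for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(lambda ind: -sum(x * x for x in ind))  # toy sphere problem
```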

Keywords: self-adaptive genetic algorithms, software engineering, dynamic software product lines, reconfigurable architecture

Procedia PDF Downloads 285
1959 A Framework for Auditing Multilevel Models Using Explainability Methods

Authors: Debarati Bhaumik, Diptish Dey

Abstract:

Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we take a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, and a traffic-light risk assessment method is coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.
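The transparency comparison at the heart of the framework can be sketched as below, with a plain logistic model standing in for a multilevel model (an assumption for brevity) and SHAP's feature ranking checked against the model's own coefficient ranking, in the spirit of the PoCE KPI.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

shap_values = shap.LinearExplainer(model, X).shap_values(X)
intrinsic_rank = np.argsort(-np.abs(model.coef_[0]))        # model's own ranking
shap_rank = np.argsort(-np.abs(shap_values).mean(axis=0))   # SHAP's ranking

# Agreement between the two rankings would feed a PoCE-style KPI.
print((intrinsic_rank == shap_rank).mean())
```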

Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics

Procedia PDF Downloads 93
1958 Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins

Authors: Navab Karimi, Tohid Alizadeh

Abstract:

An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was used to capture images of bulk raisins with 5-50% mixtures of pure and impure berries. The textural features of the bulk raisins were extracted using grey-level histograms, the co-occurrence matrix, and local binary patterns (a total of 108 features). A genetic algorithm and neural network regression were used for selecting and ranking the best features (21 features). The GLCM feature set was found to have the highest accuracy (92.4%) among the sets. Subsequently, multiple feature combinations from the previous stage were fed into a second regression (linear regression) to increase accuracy, where a combination of 16 features was found to be the optimum. Finally, a Support Vector Machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively.
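The GLCM feature set that scored highest can be sketched with scikit-image as follows; the distances, angles, and property list are typical choices, not necessarily the authors'.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    """gray_img: 2D uint8 array of a bulk-raisin image patch."""
    glcm = graycomatrix(gray_img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# The resulting vectors feed the GA-based selection stage and then an SVM.
```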

Keywords: sun-dried organic raisin, genetic algorithm, feature extraction, ANN regression, linear regression, support vector machine, South Azerbaijan

Procedia PDF Downloads 73
1957 The Customization of 3D Last Form Design Based on Weighted Blending

Authors: Shih-Wen Hsiao, Chu-Hsuan Lee, Rong-Qi Chen

Abstract:

The last is regarded as the critical foundation of shoe design and development: it not only relates to the comfort of wearing shoes but also aids shoe styling and manufacturing. In order to enhance the efficiency and application of last development, a computer-aided methodology for customized last form design is proposed in this study. Reverse engineering is mainly applied to the process of scanning the last form, and minimum energy is used to revise surface continuity; the surface of the last is reconstructed from the feature curves of the scanned last. Once the surface of a last is reconstructed, on the foundation of the proposed last form reconstruction module, the weighted arithmetic mean method is applied to calculate the shape morphing, which differs from grading via the control mesh of the last, and a subdivision algorithm is used to create the surface of the last mesh. Thus, foot-fitting 3D last forms of different sizes are generated from the original form features with their functions retained. Finally, the practicability of the proposed methodology is verified through case studies.
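The weighted arithmetic mean at the core of the morphing step reduces to a few lines of numpy, assuming the reference lasts share mesh topology; the two size meshes named in the usage comment are placeholders.

```python
import numpy as np

def blend_lasts(vertex_sets, weights):
    """vertex_sets: list of (n_vertices, 3) arrays sharing mesh topology;
    weights: one non-negative blending weight per reference last."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                   # normalize to sum to one
    return np.tensordot(w, np.stack(vertex_sets), axes=1)

# last_a and last_b are assumed pre-registered scans of two reference sizes:
# custom = blend_lasts([last_a, last_b], [0.3, 0.7])
```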

Keywords: 3D last design, customization, reverse engineering, weighted morphing, shape blending

Procedia PDF Downloads 339
1956 Analyzing Apposition and the Typology of Specific Reference in Newspaper Discourse in Nigeria

Authors: Monday Agbonica Bello Eje

Abstract:

The language of the print media is characterized by the use of apposition. This linguistic element functions strategically in journalistic discourse, where it is communicatively necessary to name individuals and provide information about them. Linguistic studies of the language of the print media with a bias toward apposition have largely dwelt on other areas, leaving the typology of appositive reference in newspaper discourse unexamined. Yet it is capable of revealing the ways writers communicate and provide the information readers need to follow and understand the message. The study therefore analyses the patterns of appositional occurrence and the typology of reference in newspaper articles. The data were obtained from The Punch and Daily Trust newspapers: a total of six editions, collected randomly and spread over three months. News and feature articles were used in the analysis. Guided by the referential theory of meaning in discourse, the appositions identified were subjected to analysis. The findings show that the semantic relations of coreference and speaker coreference have the highest percentage and frequency of occurrence in the data. This is because the subject matter of news reports and feature articles focuses on humans and the events around them; as a result, readers need to be provided with some detail and background information in order to identify referents and follow the discourse. The non-referential relations of absolute synonymy and speaker synonymy, by contrast, have fewer occurrences and lower percentages in the analysis, which is tied to a major feature of the language of the media: simplicity. The paper concludes that apposition is mainly used to provide the reader with detail. In this way, the writer transmits information that helps him not only to give detailed yet concise descriptions but also to help the reader follow the discourse.

Keywords: apposition, discourse, newspaper, Nigeria, reference

Procedia PDF Downloads 173
1955 Capturing the Stress States in Video Conferences by Photoplethysmographic Pulse Detection

Authors: Jarek Krajewski, David Daxberger

Abstract:

We propose a stress detection method based on an RGB camera using heart rate detection, also known as Photoplethysmography Imaging (PPGI). This technique focuses on measuring the small changes in skin colour caused by blood perfusion. A stationary lab setting with simulated video conferences was chosen, using constant light conditions and a sampling rate of 30 fps. The ground-truth measurement of heart rate was conducted with a common PPG system. The proposed approach for pulse peak detection is machine learning-based, applying brute force feature extraction for the prediction of heart rate pulses. The statistical analysis showed good agreement (correlation r = .79, p < 0.05) between the reference heart rate system and the proposed method. Based on these findings, the proposed method could provide a reliable, low-cost, and contactless way of measuring HR parameters in daily-life environments.
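A hedged sketch of the contactless measurement idea: average the green channel over a skin region, band-pass around plausible pulse rates, and count peaks. The frame rate matches the 30 fps setup; the face ROI is assumed to be given, and the band edges are typical values rather than the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 30.0  # camera sampling rate (fps), matching the lab setup

def heart_rate_bpm(frames_roi):
    """frames_roi: (n_frames, h, w, 3) RGB patches of the skin region."""
    signal = frames_roi[..., 1].mean(axis=(1, 2))   # mean green channel
    b, a = butter(3, [0.7 / (FS / 2), 4.0 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, signal)               # keep roughly 42-240 bpm
    peaks, _ = find_peaks(filtered, distance=FS * 0.4)
    return 60.0 * len(peaks) / (len(signal) / FS)
```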

Keywords: heart rate, PPGI, machine learning, brute force feature extraction

Procedia PDF Downloads 123
1954 Machine Learning for Feature Selection and Classification of Systemic Lupus Erythematosus

Authors: H. Zidoum, A. AlShareedah, S. Al Sawafi, A. Al-Ansari, B. Al Lawati

Abstract:

Systemic lupus erythematosus (SLE) is an autoimmune disease with genetic and environmental components. SLE is characterized by wide variability in clinical manifestations and a course frequently subject to unpredictable flares. Despite recent progress in classification tools, the early diagnosis of SLE is still an unmet need for many patients. This study proposes an interpretable disease classification model that combines the efficient, high predictive performance of CatBoost with the model-agnostic interpretation tools of Shapley Additive exPlanations (SHAP). The CatBoost model was trained on a local cohort of 219 Omani patients with SLE as well as patients with other control diseases. Furthermore, the SHAP library was used to generate individual explanations of the model's decisions and to rank clinical features by contribution. Overall, we achieved an AUC score of 0.945 and an F1-score of 0.92, and identified four clinical features (alopecia, renal disorders, cutaneous lupus, and hemolytic anemia), along with the patient's age, that were shown to contribute most to the prediction.
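A sketch of the described pipeline, assuming the catboost and shap packages; the clinical feature names come from the abstract, while the data file and label column are hypothetical.

```python
import pandas as pd
import shap
from catboost import CatBoostClassifier

df = pd.read_csv("sle_cohort.csv")            # hypothetical local cohort
X, y = df.drop(columns=["sle"]), df["sle"]

model = CatBoostClassifier(verbose=0).fit(X, y)
explainer = shap.TreeExplainer(model)         # per-patient explanations
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value ranks clinical features by contribution,
# e.g. alopecia, renal disorders, cutaneous lupus, hemolytic anemia, age.
ranking = pd.Series(abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False).head())
```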

Keywords: feature selection, classification, systemic lupus erythematosus, model interpretation, SHAP, Catboost

Procedia PDF Downloads 83
1953 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
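A hedged Keras sketch of the hybrid idea: frozen VGG16 and ResNet50 feature extractors concatenated under one classification head, trained with class weights against imbalance. The input size, head width, and example weights are illustrative, not the paper's exact configuration.

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(224, 224, 3))
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
res = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = res.trainable = False          # freeze the pre-trained extractors

vgg_feat = vgg(tf.keras.applications.vgg16.preprocess_input(inp))
res_feat = res(tf.keras.applications.resnet50.preprocess_input(inp))
x = tf.keras.layers.Concatenate()([vgg_feat, res_feat])   # holistic feature set
x = tf.keras.layers.Dense(256, activation="relu")(x)
out = tf.keras.layers.Dense(9, activation="softmax")(x)   # nine skin conditions

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Class weights counter the imbalance during training, e.g.:
# model.fit(train_ds, class_weight={i: w for i, w in enumerate(weights)}, epochs=10)
```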

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 86
1952 Feature Extraction Based on Contourlet Transform and Log Gabor Filter for Detection of Ulcers in Wireless Capsule Endoscopy

Authors: Nimisha Elsa Koshy, Varun P. Gopi, V. I. Thajudin Ahamed

Abstract:

The entire gastrointestinal (GI) tract cannot be visualized with conventional endoscopic exams. Wireless Capsule Endoscopy (WCE) is a low-risk, painless, noninvasive procedure for diagnosing diseases such as bleeding, polyps, ulcers, and Crohn's disease within the human digestive tract, especially the small intestine, which was unreachable using traditional endoscopic methods. However, the analysis of the massive number of WCE images is tedious and time-consuming for physicians. Hence, researchers have developed software methods to detect these diseases automatically and thereby improve the effectiveness of WCE. In this paper, a novel textural feature extraction method is proposed based on the contourlet transform and the log-Gabor filter to distinguish ulcer regions from normal regions. The results show that the proposed method performs well, with a high accuracy rate of 94.16% using a Support Vector Machine (SVM) classifier in the HSV colour space.
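The radial log-Gabor transfer function used in such feature extractors can be constructed directly in numpy (the contourlet transform is omitted here, having no standard Python implementation); f0 and the sigma/f0 ratio are typical values, not the paper's.

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_on_f=0.55):
    """Frequency-domain log-Gabor transfer function on a size x size grid."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    lg[0, 0] = 0.0                           # log-Gabor has no DC component
    return lg

# Filtering: np.fft.ifft2(np.fft.fft2(img) * log_gabor_radial(img.shape[0]))
```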

Keywords: contourlet transform, log gabor filter, ulcer, wireless capsule endoscopy

Procedia PDF Downloads 540
1951 Hybrid Deep Learning and FAST-BRISK 3D Object Detection Technique for Bin-Picking Application

Authors: Thanakrit Taweesoontorn, Sarucha Yanyong, Poom Konghuayrob

Abstract:

Robotic arms have gained popularity in various industries due to their accuracy and efficiency. This research proposes a method for bin-picking tasks using a cobot, combining the YOLOv5 CNN model for object detection and pose estimation with traditional feature detection (FAST), feature description (BRISK), and matching algorithms. By integrating these algorithms and utilizing a small-scale depth sensor camera to capture depth and color images, the system achieves real-time object detection and accurate pose estimation, enabling the robotic arm to pick objects correctly in both position and orientation. Furthermore, the proposed method is implemented within the ROS framework to provide a seamless platform for robotic control and integration. This integration of robotics, cameras, and AI technology contributes to the development of industrial robotics, opening up new possibilities for automating challenging tasks and improving overall operational efficiency.
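The classical half of the pipeline (FAST keypoints described with BRISK and matched by Hamming distance between a model view and the bin scene) maps directly onto OpenCV, as sketched below with placeholder image names.

```python
import cv2

fast = cv2.FastFeatureDetector_create()
brisk = cv2.BRISK_create()

def detect_describe(img):
    kps = fast.detect(img, None)             # FAST corner detection
    return brisk.compute(img, kps)           # BRISK binary descriptors

model_img = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
scene_img = cv2.imread("bin_scene.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = detect_describe(model_img)
kp2, des2 = detect_describe(scene_img)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
# The best matches then seed the pose-estimation step alongside depth data.
```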

Keywords: robotic vision, image processing, applications of robotics, artificial intelligence

Procedia PDF Downloads 96
1950 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet seeking health-related information, there is a greater risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, then reproduced those same models with the feature added, and evaluated them using precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and a constant 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we saw a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment and substituting a deep learning neural network model for Naive Bayes.
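A hedged sketch of appending a sentiment feature to a Naive Bayes headline classifier; VADER stands in for the unnamed sentiment scorer (an assumption), and ComplementNB is shown since it gained the most precision.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

headlines = ["Miracle cure doctors won't tell you about",
             "CDC updates COVID-19 guidance"]
labels = [1, 0]                                   # 1 = fake, 0 = reliable

X_text = TfidfVectorizer().fit_transform(headlines)
sia = SentimentIntensityAnalyzer()
sentiment = np.array([[sia.polarity_scores(h)["compound"]] for h in headlines])

# Shift the [-1, 1] compound score so all features stay non-negative for NB.
X = hstack([X_text, sentiment + 1.0])
model = ComplementNB().fit(X, labels)
```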

Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model

Procedia PDF Downloads 97
1949 Algorithm Research on Traffic Sign Detection Based on Improved EfficientDet

Authors: Ma Lei-Lei, Zhou You

Abstract:

Aiming at the low detection accuracy of deep learning algorithms in traffic sign detection, this paper proposes an improved EfficientDet-based traffic sign detection algorithm. Multi-head self-attention is introduced in the minimum-resolution layer of the EfficientDet backbone to achieve effective aggregation of local and global depth information, and an improved feature fusion pyramid with additional vertical cross-layer connections is proposed, which improves the performance of the model while introducing only a small amount of complexity. The Balanced L1 loss is introduced to replace the original regression loss function, Smooth L1, which solves the balance problem in the loss function. Experimental results show that the algorithm proposed in this study is well suited to the task of traffic sign detection: compared with other models, the improved EfficientDet has the best detection accuracy, and although its speed is not completely dominant, it still meets the real-time requirement.
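For reference, the Balanced L1 loss adopted above (introduced in Libra R-CNN) can be written in PyTorch as follows, with its usual defaults; beta marks the switch between the smooth and linear branches.

```python
import math
import torch

def balanced_l1_loss(pred, target, alpha=0.5, gamma=1.5, beta=1.0):
    diff = torch.abs(pred - target)
    b = math.e ** (gamma / alpha) - 1   # chosen so the gradient is continuous at beta
    loss = torch.where(
        diff < beta,
        alpha / b * (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta,  # constant keeps the value continuous
    )
    return loss.mean()

# Drop-in usage in the box-regression branch:
# loss = balanced_l1_loss(predicted_offsets, target_offsets)
```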

Keywords: convolutional neural network, transformer, feature pyramid networks, loss function

Procedia PDF Downloads 97
1948 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure grants them great flexibility of motion, allowing them to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also arise while machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with identifying optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator for finding optimized cutting parameters in terms of, for example, depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector, submitted to milling forces, is controlled through an inverse kinematics scheme while the positions of its joints are controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces with and without a deformable robot structure, together with the tracking errors of the end-effector; illustrations of the resulting machined surfaces are also presented. Considering link flexibility highlighted an increase in the magnitude of the cutting forces. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.

Keywords: control, milling, multibody, robotic, simulation

Procedia PDF Downloads 248
1947 COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images

Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam

Abstract:

COVID-19 is a highly contagious viral infection with major worldwide health implications, and the global economy has suffered as a result. The spread of this pandemic disease can be slowed if positive patients are found early, and COVID-19 prediction is beneficial for identifying patients at risk. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful where doctors and clinicians are scarce, as in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6,000 X-ray scan images from various sources and split them into two categories: normal and COVID-affected. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective for COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
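The abstract does not give the exact architecture, so the compact Keras sketch below is only illustrative of a deep-layered binary CNN for normal-vs-COVID chest X-rays.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),           # grayscale X-ray
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),    # deeper feature maps
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # normal vs. COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```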

Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy

Procedia PDF Downloads 79
1946 Sentiment Analysis: An Enhancement of Ontological-Based Features Extraction Techniques and Word Equations

Authors: Mohd Ridzwan Yaakub, Muhammad Iqbal Abu Latiffi

Abstract:

Online business has become popular recently due to the massive amount of information and the media available on the Internet. This has resulted in a huge number of reviews in which consumers share their opinions, criticisms, and satisfaction regarding the products they have purchased on websites or on social media such as Facebook and Twitter. Analyzing customer behavior has thus become very important for organizations seeking new market trends and insights. The reviews from websites and social media come as structured and unstructured data that require a sentiment analysis approach for analyzing customer reviews. In this article, the techniques used in sentiment analysis are defined, along with a definition of the ontology and a description of its possible usage in sentiment analysis. This leads to empirical research involving mobile phone reviews and the ontology used in the experiment. The researchers also explore the role of data preprocessing and the feature selection methodology. As a result, an ontology-based approach to sentiment analysis can help in achieving high accuracy for the classification task.

Keywords: feature selection, ontology, opinion, preprocessing data, sentiment analysis

Procedia PDF Downloads 200
1945 Defect Correlation of Computed Tomography and Serial Sectioning in Additively Manufactured Ti-6Al-4V

Authors: Bryce R. Jolley, Michael Uchic

Abstract:

This study presents initial results toward the correlative characterization of inherent defects in additively manufactured (AM) Ti-6Al-4V. X-ray Computed Tomography (CT) defect data are compared and correlated with microscopic photographs obtained via automated serial sectioning. The metal AM specimen was manufactured from virgin Ti-6Al-4V powder to specified dimensions. A post-contour was applied during the fabrication process with a speed of 1050 mm/s, a power of 260 W, and a width of 140 µm, and the specimen was stress-relief heat-treated at 16°F for 3 hours. Microfocus CT imaging was conducted on a predetermined region of the build, with parameters optimized for Ti-6Al-4V additive manufacture. After CT imaging, a modified RoboMet.3D version 2 was employed for serial sectioning and optical microscopy characterization of the same predetermined region. Automated montage capture of sub-micron-resolution, bright-field reflection, 12-bit monochrome optical images was performed, and these images were post-processed to produce 2D and 3D data sets. The processing included thresholding and segmentation to improve the visualization of defect features. The defects observed by optical imaging were compared and correlated with those observed by CT imaging over the same region of the specimen, and quantitative results of area fraction and equivalent pore diameters obtained via each method are presented for this correlation. It is shown that microfocus CT imaging does not capture all inherent defects within this Ti-6Al-4V AM sample. Best practices for this correlative effort are presented, as well as the future direction of research resulting from this study.
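The quantitative comparison step (area fraction and equivalent pore diameters from a binarized slice) can be sketched with scikit-image; the grayscale threshold is an assumed placeholder.

```python
import numpy as np
from skimage import measure

def pore_stats(section_img, threshold=60):
    """section_img: 2D grayscale array of one serial-section slice."""
    pores = section_img < threshold                  # dark pixels = pores
    labels = measure.label(pores)
    diameters = [r.equivalent_diameter for r in measure.regionprops(labels)]
    return pores.mean(), np.array(diameters)         # area fraction, diameters

# The same statistics computed on CT slices allow a pore-by-pore correlation.
```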

Keywords: additive manufacture, automated serial sectioning, computed tomography, nondestructive evaluation

Procedia PDF Downloads 141
1944 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies on sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic feature engineering methodology, employing both time series and non-time-series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes; applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex-system prediction, and paves the way for potential profitability and more informed decision-making in sports betting markets.
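A hedged sketch of the point-spread approach: XGBoost regression on engineered team features followed by a naive bet simulation against the bookmaker line. The feature table, line column, and -110 payout rule are illustrative assumptions, not the paper's exact setup.

```python
import pandas as pd
from xgboost import XGBRegressor

df = pd.read_csv("nba_games.csv")                 # hypothetical feature table
features = [c for c in df.columns if c not in ("spread", "book_line")]
train, test = df.iloc[:-500], df.iloc[-500:]      # time-ordered holdout

model = XGBRegressor(n_estimators=300, learning_rate=0.05)
model.fit(train[features], train["spread"])
pred = model.predict(test[features])

# Bet the side where the model disagrees with the bookmaker; -110 odds assumed.
bet_home = pred > test["book_line"]
win = (test["spread"] > test["book_line"]) == bet_home
profit = (win * 100 / 1.1 - (~win) * 100).sum()   # profit per $100 stake
print(profit)
```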

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 102