Search results for: fake health news classification model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25411


25351 Constriction of Economic News over Business and Financial News: Analysis of the Change in Indian Business-Papers over the Past Three Decades

Authors: Disha Batra

Abstract:

With the advent of economic reforms in India in 1992, economic journalism in India has undergone a sea change alongside the rise of the Indian economy. Economic news stories (about the economy-in-general) have been, and are still being, squeezed out by business (individual corporate) and financial (financial and equity market) news stories. The objective of the study is to explore how economic journalism – news stories about macroeconomic issues or the economy-in-general – has changed over the past three decades with the emergence of LPG (Liberalisation, Privatisation, and Globalisation) policies in India. The purpose of the study is to examine to what extent business and financial news are constricting economic news, which is done by analysing the news stories and content of business papers. The study is based on content analyses of the top three Indian business dailies as per the IRS (Indian Readership Survey) 2017. A parametric analysis of different parameters (source of information, sub-topic, dominant source in economic news, layout and framing, etc.) has been done in order to identify the distinct adaptations and modifications made by these dailies. The paper also dwells upon a thematic analysis of these newspapers in order to explore the coverage given to the various sub-themes of EBF (economic, business, and financial) journalism. The study revealed that stories concerning broader issues about the economy, which are likely to be of public concern, have been dropped. The paper further indicates an upward trend in stories concerning individual corporates and the equity and financial markets. The findings raise concern over the indicated disparity between economic and business news stories, which may further limit the information that people need in order to make well-informed decisions.

Keywords: business-papers, business news, economic news, financial news

Procedia PDF Downloads 112
25350 Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags

Authors: Zhang Shuqi, Liu Dan

Abstract:

To address the problems of intelligent network anomaly traffic detection models, such as low detection accuracy caused by a lack of training samples and poor performance on small-sample attack detection, a classification model enhancement method, F-ACGAN (Flow Auxiliary Classifier Generative Adversarial Network), which introduces a generative adversarial network and adversarial training, is proposed. Generating adversarial data with category labels enhances the training effect and improves classification accuracy and model robustness. F-ACGAN consists of three steps. First, features are preprocessed, including data type conversion, dimensionality reduction, and normalization. Second, a generative adversarial network model with feature learning ability is designed, and the sample generation quality of the model is improved through adversarial iterations between the generator and the discriminator. Third, an adversarial disturbance factor along the gradient direction of the classification model is added to improve the diversity and antagonism of the generated data and to push the model to learn adversarial classification features. An experiment constructing a classification model on the UNSW-NB15 dataset shows that with F-ACGAN enhancement of the basic model, classification accuracy improved by 8.09% and the F1 score improved by 6.94%.
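The gradient-direction disturbance described above can be sketched in isolation. The snippet below is an illustrative FGSM-style perturbation applied to a plain logistic model, not the paper's F-ACGAN; all names, data, and parameter values are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_perturb(x, y, w, b, eps=0.1):
    """Perturb input x along the sign of the loss gradient w.r.t. x."""
    p = sigmoid(x @ w + b)            # predicted probability
    grad_x = (p - y) * w              # d(logloss)/dx for a logistic model
    return x + eps * np.sign(grad_x)  # step in the direction that raises loss

def logloss(x, y, w, b):
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
w = rng.normal(size=5)                # illustrative classifier weights
b = 0.0
x = rng.normal(size=5)                # one "flow feature" vector (synthetic)
y = 1.0

x_adv = adversarial_perturb(x, y, w, b)
print(logloss(x, y, w, b), logloss(x_adv, y, w, b))  # loss rises for x_adv
```

Such perturbed samples, paired with their category labels, are what an adversarial-training loop would feed back to the classifier.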

Keywords: data imbalance, GAN, ACGAN, anomaly detection, adversarial training, data augmentation

Procedia PDF Downloads 80
25349 The Semiotic Analysis of Thai Social Contexts in Thai Post’s News Articles

Authors: Pakpoom Hannapha

Abstract:

This paper investigates the implications of social and political contexts in Thai Post news articles written by the columnist Khon Plai Soy. The sample comprised twenty-eight news articles published between 28th May 2015 and 28th June 2015, selected and analyzed according to semiotics, including implication, connotation, cultural politics, and Thai usage in newspaper articles. The data analysis is divided into two parts: first, an analysis of the signs/signifiers appearing in the articles and, second, an analysis of the columnist’s purposes. The study demonstrated that representations of signs in the selected articles fall into four groups: events, actions, persons, and organizations. The implications of the news articles were analyzed in two aspects according to semiotics. It was found that the columnist mostly writes for the purposes of education, facts, and personal opinion, and he also offers solutions to the problems discussed in the articles. The writer often explicates knowledge and facts alongside either his personal opinions or problem solutions. According to the results, studying the implications of news articles in the Thai Post through a semiotic approach can help clarify and explain connotative meanings in terms of both content and the writer’s purposes. This paper can enhance readers’ understanding of semiotic implications through signs and meanings in texts and can thus be used as a model to explore other political news articles.

Keywords: semiotic analysis, Thai social contexts, Thai Post’s news articles

Procedia PDF Downloads 211
25348 Model of Optimal Centroids Approach for Multivariate Data Classification

Authors: Pham Van Nha, Le Cam Binh

Abstract:

Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm inspired by the natural behavior of birds and fish during migration and foraging. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. PSO’s ideas are simple and easy to understand, but PSO has mostly been applied to simple model problems. We argue that, in order to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO’s general mathematical model (MPSO) is analyzed as a universal optimization model. Then, a Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several benchmark data sets to demonstrate the effectiveness of MOC compared with several previously proposed schemes.
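As a concrete reference point for the general PSO model, a minimal global-best PSO fits in a few lines. The sketch below minimizes the sphere function with standard textbook parameter values; it is not the paper's MPSO or MOC.

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO minimizing f over [-5, 5]^dim (illustrative settings)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()                             # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p**2))     # sphere function
print(best, best_val)                            # best_val close to 0
```

In a MOC-style use, the objective would instead score candidate class centroids against the training data.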

Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization

Procedia PDF Downloads 183
25347 The Factors Predicting Credibility of News in Social Media in Thailand

Authors: Ekapon Thienthaworn

Abstract:

This research aims to study the factors predicting the credibility of news in social media, using survey research with questionnaires. The sample is a group of 400 undergraduate students in Bangkok, selected by multi-stage random sampling. Data were analyzed with descriptive statistics and multivariate regression analysis. The research found that overall trust in news read on social media was at an intermediate level. The multivariate regression analysis showed that, among the factors examined, only media content had significant power to forecast the credibility that undergraduate students in Bangkok assign to news on social media (significance level of 0.05). Media content was the variable with the highest influence on forecast credibility, and the speed of the news was also important to its perceived reliability.
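The multivariate regression step can be illustrated with ordinary least squares on synthetic survey-like data. The factor names, scales, and coefficients below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400                                   # sample size, as in the survey
# Hypothetical 5-point-scale predictor factors:
content = rng.normal(3.5, 0.8, n)
speed = rng.normal(3.2, 0.9, n)
source = rng.normal(3.0, 1.0, n)
# Assumed ground truth: content has the largest influence, then speed.
credibility = (0.5 + 0.6 * content + 0.3 * speed + 0.0 * source
               + rng.normal(0, 0.3, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), content, speed, source])
coef, *_ = np.linalg.lstsq(X, credibility, rcond=None)
print(coef)  # intercept, then weights; content's weight dominates
```

A full analysis would add standard errors and t-tests to decide which coefficients are significant at the 0.05 level.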

Keywords: credibility of news, behaviors and attitudes, social media, web board

Procedia PDF Downloads 448
25346 Impact of New Media Technologies on News, Social Interactions, and Traditional Media

Authors: Ademola Bamgbose

Abstract:

The new media revolution, which encompasses a wide variety of new media technologies such as blogs, social networking, virtual worlds, and wikis, has had a great influence on communications, traditional media, and other disciplines. This paper reviews the impact of new media technologies on news, social interactions, and traditional media in developing and developed countries. The study points to the fact that new media technologies have a significant impact on news, social interactions, and the traditional media in developing and developed countries alike, both positively and negatively. Social interactions have been significantly affected, as have news production and reporting. The paper argues that, despite the pervasiveness of new media technologies, they will not bring about a total decline of traditional media. This paper contributes to the theoretical framework on new media and will help to assess the extent of the impact of new media in different locations.

Keywords: communication, media, news, new media technologies, social interactions, traditional media

Procedia PDF Downloads 245
25345 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method for obtaining the approximate posteriors of the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first describe the process of inducing a posterior distribution over the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and we then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
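The importance sampling ingredient can be sketched in one dimension: estimate a posterior expectation by sampling from an approximating distribution and reweighting. The Gaussian target and proposal below are illustrative assumptions, far simpler than the paper's latent-function posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, mu, sigma):
    """Log density of N(mu, sigma^2), evaluated elementwise."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Target "posterior" p = N(1, 0.5); tractable proposal q = N(0, 1).
samples = rng.normal(0.0, 1.0, 100_000)
log_w = log_gauss(samples, 1.0, 0.5) - log_gauss(samples, 0.0, 1.0)

w = np.exp(log_w - log_w.max())      # subtract max for numerical stability
w /= w.sum()                         # self-normalized importance weights
est_mean = np.sum(w * samples)       # estimates E_p[x], whose true value is 1
print(est_mean)
```

The same reweighting idea applies when the target is the posterior over latent function values and the proposal is a variational approximation.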

Keywords: multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence

Procedia PDF Downloads 414
25344 Global News Coverage of the Pandemic: Towards an Ethical Framework for Media Professionalism

Authors: Anantha S. Babbili

Abstract:

This paper analyzes the media practices dominant in global journalism within the framework of the world press theories (Libertarian, Authoritarian, Communist, and Social Responsibility) to evaluate their efficacy in the coverage of the coronavirus, also known as COVID-19. Global media flows, determinants of news coverage, international awareness, and the Western view of the world are critically analyzed within the context of the prevalent news values that underpin free press and media coverage of the world. While evaluating the global discourse paramount to a sustained and dispassionate understanding of world events, this paper proposes an ethical framework that brings clarity, devoid of sensationalism, partisanship, and right-wing and left-wing interpretations, to the breaking and dangerous development of a pandemic. As the world struggles to contain the coronavirus pandemic, with deaths climbing close to 6,000 from late January to mid-March 2020, the populations of developed and developing nations alike are beset with news media renditions of the crisis that are contradictory, confusing, and evoking anxiety, fear, and hysteria. How are we to understand differing news standards and news values? What lessons do we as journalism and mass media educators, researchers, and academics learn in order to construct a better news model and a structure of media practice that addresses science, health, and media literacy among media practitioners, journalists, and news consumers? As traditional media struggle to cover the pandemic for their audiences and consumers, social media, from which an increasing number of consumers get their news, have exerted their influence both positively and negatively.
Even as the world struggles to grasp the full significance of the pandemic, the World Health Organization (WHO) has been feverishly battling an additional challenge related to the pandemic in what it termed an 'infodemic': 'an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy sources and reliable guidance when they need it.' There is, indeed, a need for journalism and news coverage in times of pandemics that reflect social responsibility and the ethos of public service journalism. Social media and high-tech information corporations, collectively termed GAMAF (Google, Apple, Microsoft, Amazon, and Facebook), can team up with reliable traditional media (newspapers, magazines, book publishers, and radio and television corporations) to ease public emotions and be helpful in times of a pandemic outbreak. GAMAF can, conceivably, weed out sensational and non-credible sources of coronavirus information and exotic cures offered for sale as a quick fix, and demonetize videos that exploit people’s vulnerabilities at their lowest ebb. Credible news of utility, delivered in a sustained, calm, and reliable manner, serves people in a meaningful and helpful way. The world’s consumers of news and information deserve a healthy and trustworthy news media, at least in the time of the COVID-19 pandemic. Towards this end, the paper proposes a practical model for news media and journalistic coverage during times of a pandemic.

Keywords: COVID-19, international news flow, social media, social responsibility

Procedia PDF Downloads 87
25343 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy

Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie

Abstract:

In recent years, there has been an explosion in the use of technologies that help discover diseases. For example, DNA microarrays allow us, for the first time, to obtain a "global" view of the cell. They have great potential to provide accurate medical diagnoses and to help in finding the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that can predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time of eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces the best results and takes the least time to build a model among the tree classifiers. Among the classification rule algorithms, the nearest-neighbor-like algorithm (NNge) is the best, due to its high accuracy and the least time taken to build a model.
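The comparison protocol (accuracy plus model build time) can be sketched with a deliberately simple learner. The one-level decision stump below stands in for the tree and rule classifiers compared in the study, and the toy expression-like data are invented.

```python
import time

def train_stump(X, y):
    """Pick the single-feature threshold split with the lowest training error."""
    best = (len(y) + 1, 0, 0.0, False)          # (error, feature, threshold, flip)
    for f in range(len(X[0])):
        for t in {row[f] for row in X}:
            pred = [row[f] > t for row in X]
            err = sum(p != bool(yi) for p, yi in zip(pred, y))
            for flip in (False, True):          # allow the inverted rule too
                e = len(y) - err if flip else err
                if e < best[0]:
                    best = (e, f, t, flip)
    return best[1], best[2], best[3]

def predict(model, row):
    f, t, flip = model
    return int((row[f] > t) != flip)

# Toy two-gene "expression" samples with binary disease labels.
X = [[0.2, 5.1], [0.4, 4.9], [0.9, 5.0], [1.1, 5.2]]
y = [0, 0, 1, 1]

start = time.perf_counter()
model = train_stump(X, y)
build_time = time.perf_counter() - start        # the "time to build model" metric
acc = sum(predict(model, r) == yi for r, yi in zip(X, y)) / len(y)
print(acc, build_time)
```

A study like the one above repeats this measurement across many algorithms and datasets and ranks them on both metrics.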

Keywords: data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data

Procedia PDF Downloads 295
25342 Analyzing the Efficiency of Initiatives Taken against Disinformation during Election Campaigns: Case Study of Young Voters

Authors: Fatima-Zohra Ghedir

Abstract:

Social media platforms have been actively working on solutions and have combined their efforts with media, policy makers, educators, and researchers to protect citizens and prevent interference in information, political discourse, and elections. Facebook, for instance, deleted fake accounts, implemented fake account and fake content detection algorithms, partnered with news agencies to manually fact-check content, and changed its newsfeed display. Twitter and Instagram regularly communicate their efforts and notify their users of improvements and safety guidelines. More funds have been allocated to media literacy programs to empower citizens in preparation for the coming elections. This paper investigates the efficiency of these initiatives and analyzes the metrics used to measure their success or failure. The objective is also to determine the segments of the population more prone to falling into disinformation traps during elections despite the measures taken over the last four years. The study also examines the groups who were positively impacted by these measures. This paper relies on both desk and field methodologies. A survey was administered to French students aged between 17 and 29 years old, and semi-guided interviews were conducted with a similar audience. The analysis of the survey and the interviews shows that respondents were exposed to the initiatives described above and are aware of the existence of disinformation issues. However, they do not understand what disinformation really entails or means. For instance, for most of them, disinformation is synonymous with an opposing point of view, without taking into account the truthfulness of the content. Besides, they still consume and believe the information shared by their friends and family, with little questioning of the ways their close ones get informed.

Keywords: democratic elections, disinformation, foreign interference, social media, success metrics

Procedia PDF Downloads 85
25341 Statistical Classification, Downscaling and Uncertainty Assessment for Global Climate Model Outputs

Authors: Queen Suraajini Rajendran, Sai Hung Cheung

Abstract:

Statistical downscaling models are required to connect global climate model outputs with local weather variables for climate change impact prediction. For reliable climate change impact studies, the uncertainty associated with the model (including natural variability, uncertainty in the climate model(s) and the downscaling model, model inadequacy, and uncertainty in the predicted results) should be quantified appropriately. In this work, a new approach is developed by the authors for statistical classification, statistical downscaling, and uncertainty assessment, and it is applied to Singapore rainfall. It is a robust Bayesian uncertainty analysis methodology, with tools based on coupling dependent modeling error with classification and statistical downscaling models, in such a way that the dependency among modeling errors impacts the results of both classification and statistical downscaling model calibration, as well as the uncertainty analysis for future prediction. Singapore data are considered here, and the uncertainty and prediction results are obtained. From the results, directions of research for improvement are briefly presented.

Keywords: statistical downscaling, global climate model, climate change, uncertainty

Procedia PDF Downloads 341
25340 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Classifying different aspects of browsing data, such as Bookmarks, History, and the Download Manager, into useful categories, instead of treating them as plain information, would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve accurate classification under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and providing better privacy and security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage and runs an algorithm to classify it into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared with other classification algorithms such as Support Vector Machines and Neural Networks. Naive Bayes classification requires a small memory footprint and little computation, which suits the smartphone environment.
This solution can also partition the model into multiple chunks, which reduces memory usage compared with loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware, integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. The solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification in other use cases to enhance the browsing experience.
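A minimal multinomial Naive Bayes classifier of the kind such an engine relies on can be written with only the standard library, which is part of why it suits resource-constrained devices. The tiny corpus and category names below are invented for illustration; a real engine would train on features extracted from the DOM tree.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        self.class_counts = Counter(labels)           # class priors
        self.word_counts = defaultdict(Counter)       # per-class word counts
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for w in doc.split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, doc):
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for c, n_c in self.class_counts.items():
            lp = math.log(n_c / total)                # log prior
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in doc.split():
                # Laplace (add-one) smoothing for unseen words.
                lp += math.log((self.word_counts[c][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

docs = ["final score match goal team", "election vote minister policy",
        "team wins league match", "budget policy vote parliament"]
labels = ["sports", "news", "sports", "news"]

nb = NaiveBayes().fit(docs, labels)
print(nb.predict("match goal team"))   # -> sports
```

Per-class counters like `word_counts` are also what makes the model easy to partition into separately loadable chunks.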

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 137
25339 Heart Failure Identification and Progression by Classifying Cardiac Patients

Authors: Muhammad Saqlain, Nazar Abbas Saqib, Muazzam A. Khan

Abstract:

Heart failure (HF) has become a major health problem in our society. The prevalence of HF increases with patients’ age, and it is a major cause of the high mortality rate in adults. Successful identification of HF and its progression can help reduce the individual and social burden of this syndrome. In this study, we use a real data set of cardiac patients to propose a classification model for the identification and progression of HF. The data set has been divided into three age groups, namely young, adult, and old, and each age group has been further classified into four classes according to the patient’s current physical condition. Contemporary data mining classification algorithms have been applied to each individual class of every age group to identify HF. The Decision Tree (DT) gives the highest accuracy of 90% and outperforms all other algorithms. Our model accurately diagnoses the different stages of HF for each age group, and it can be very useful for the early prediction of HF.

Keywords: decision tree, heart failure, data mining, classification model

Procedia PDF Downloads 384
25338 Radical Web Text Classification Using a Composite-Based Approach

Authors: Kolade Olawande Owoeye, George R. S. Weir

Abstract:

The widespread presence of terrorism and extremism activities on the internet has become a major threat to governments and national security because of its potential dangers, which has necessitated intelligence gathering via the web and real-time monitoring of potential websites for extremist activities. However, manual classification of such content is practically difficult and time-consuming. In response to this challenge, an automated classification system based on a composite technique was developed. This is a computational framework that explores the combination of both semantic and syntactic features of the textual content of a web page. We implemented the framework on a dataset of extremist webpages that had been subjected to a manual classification process. We then built a classification model on the data using the J48 decision tree algorithm to measure how well each page can be classified into its appropriate class. The classification results obtained from our method, when compared with other state-of-the-art approaches, indicated a 96% success rate in classifying webpages against the manual classification.

Keywords: extremist, web pages, classification, semantics, posit

Procedia PDF Downloads 123
25337 Lean Models Classification: Towards a Holistic View

Authors: Y. Tiamaz, N. Souissi

Abstract:

The purpose of this paper is to present a classification of Lean models that aims to capture all the concepts related to this approach and thus facilitate its implementation. This classification allows the identification of the most relevant models according to several dimensions. From this perspective, we present a review and an analysis of the Lean models literature, and we propose dimensions for the classification of the current proposals that respect, among others, the axes of the Lean approach, the maturity of the models, and their application domains. This classification allowed us to conclude that researchers essentially consider the Lean approach as a toolbox and design their models to solve problems related to a specific environment. Since the Lean approach is no longer intended only for the automotive sector, where it was invented, but for all fields (IT, hospitals, etc.), we consider that this approach requires a generic model that is capable of being implemented in all areas.

Keywords: lean approach, lean models, classification, dimensions, holistic view

Procedia PDF Downloads 411
25336 Novel Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we propose a novel inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both a posterior distribution over the latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and it is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using Bayes' rule and the LA technique, we derive an approximate posterior distribution of the latent function, which indicates the possibility that each observation belongs to a certain class, in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimators of the hyper-parameters of the covariance matrix necessary to define the prior distribution of the latent function. These two steps repeat iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
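The core of the expectation step, finding the posterior mode of the latent function by Newton iteration under a Laplace approximation, can be sketched for the simpler binary case; the multi-class variational-EM machinery of the paper is more involved. The RBF kernel and its fixed hyperparameters below are assumptions, and the data are a synthetic toy set.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between row vectors of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_mode(K, y, iters=20):
    """Newton iterations for the posterior mode f_hat (labels y in {0, 1})."""
    f = np.zeros(len(y))
    for _ in range(iters):
        p = sigmoid(f)
        W = p * (1 - p)                        # negative Hessian of log-likelihood
        grad = y - p                           # gradient of log-likelihood
        B = np.eye(len(y)) + W[:, None] * K    # (I + W K)
        # Newton step: f <- K (I + W K)^{-1} (W f + grad)
        f = K @ np.linalg.solve(B, W * f + grad)
    return f

X = np.array([[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))   # jitter for conditioning
f_hat = laplace_mode(K, y)
print(sigmoid(f_hat))  # probabilities rise from the class-0 side to the class-1 side
```

At the mode, a Gaussian with this mean and the curvature-matched covariance serves as the approximate posterior used in the rest of the EM loop.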

Keywords: Bayesian rule, Gaussian process classification model with multiclass, Gaussian process prior, human action classification, Laplace approximation, variational EM algorithm

Procedia PDF Downloads 309
25335 Metamorphic Computer Virus Classification Using Hidden Markov Model

Authors: Babak Bashari Rad

Abstract:

A metamorphic computer virus uses different code transformation techniques to mutate its body across duplicated instances. The characteristics and functionality of the new instances are largely similar to those of their parents, but they cannot be easily detected by the majority of antivirus products on the market, as these depend on string signature-based detection techniques. The purpose of this research is to propose a Hidden Markov Model for the classification of metamorphic viruses in executable files. In the proposed solution, portable executable files are inspected to extract the instruction opcodes needed for the examination of the code. A Hidden Markov Model trained on portable executable files is employed to classify metamorphic viruses of the same family. The proposed model is able to generate and recognize common statistical features of mutated code. The model has been evaluated on a test data set, and its performance has been practically assessed in terms of False Positive Rate, Detection Rate, and Overall Accuracy. The results showed acceptable performance, with a high average Detection Rate of 99.7%.
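Scoring an opcode sequence against a trained family model uses the forward algorithm; sequences drawn from the family typically score higher than unrelated code, and a threshold on the score decides family membership. The toy states, transition, and emission probabilities below are invented for illustration, not trained on real executables.

```python
import math

def forward_log_prob(obs, states, start_p, trans_p, emit_p):
    """Log P(obs | model) via the forward algorithm (summing over all paths)."""
    alpha = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
             for s in states}
    for o in obs[1:]:
        alpha = {s: math.log(sum(math.exp(alpha[r]) * trans_p[r][s]
                                 for r in states)) + math.log(emit_p[s][o])
                 for s in states}
    return math.log(sum(math.exp(a) for a in alpha.values()))

states = ("A", "B")                    # hidden "code region" states (assumed)
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"mov": 0.5, "push": 0.3, "call": 0.2},
          "B": {"mov": 0.1, "push": 0.2, "call": 0.7}}

sample = ["mov", "push", "mov", "call"]   # an opcode sequence to score
print(forward_log_prob(sample, states, start_p, trans_p, emit_p))
```

For long opcode streams a real implementation would work in log space throughout (or rescale alpha) to avoid underflow.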

Keywords: malware classification, computer virus classification, metamorphic virus, metamorphic malware, Hidden Markov Model

Procedia PDF Downloads 292
25334 Metaphors in Egyptian News Headlines in Relation to the Egyptian Political Situation 2012-2013

Authors: Wesam Mohamed Abdel Khalek Ibrahim

Abstract:

This paper examines the use of metaphors in Arabic political news discourse, focusing particularly on the headlines of news articles relating to the Egyptian political situation in the period from June 2012 to October 2013. Metaphors are skilfully manipulated in the headlines to influence the public stance towards several events and entities, including Egypt, the Muslim Brotherhood (MB), Morsi, the June 30th uprising, Al-Sisi, and the Armed Forces. The findings reveal that Arabic political news discourse shares basic features with its English counterpart, namely the use of metaphors as persuasive strategies and the presence of certain target domains. Insights gained from this study feed back into conceptual metaphor theory by providing further evidence of the universality of metaphors.

Keywords: conceptual metaphor theory, political discourse, news discourse, Egyptian political situation

Procedia PDF Downloads 483
25333 Analyzing Behaviour of the Utilization of the Online News Clipping Database: Experience in Suan Sunandha Rajabhat University

Authors: Siriporn Poolsuwan, Kanyarat Bussaban

Abstract:

This research aims to investigate and analyze users’ behaviour in the utilization of the online news clipping database at Suan Sunandha Rajabhat University, Thailand. Data were gathered from 214 lecturers and 380 undergraduate students by using questionnaires. The findings show that most users learned of the online news clipping service from their friends, the library’s website, and their teachers. Users either taught themselves how to use it or learned through training provided by the SSRU library. Most users used the online news clipping database once per month at home, mainly for general knowledge, up-to-date academic knowledge, and assignment references. Moreover, the problems in using the online news clipping service involve the users themselves, service management, service devices (computers and tools) and the network, the service provider, and publicity. This research would benefit librarians and teachers in planning and designing library services in their work and organizations.

Keywords: online database, user behavior, news clipping, library services

Procedia PDF Downloads 286
25332 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization; this approach exploits their feature extraction capabilities on hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. From 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess noticeable haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those achieved on the initial dataset and with other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to overfitting issues in disease identification and can effectively enhance identification accuracy.
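For context on the comparison metric above: the Fréchet inception distance measures the distance between two Gaussians fitted to the feature statistics of real and generated images. A minimal numpy sketch, simplified to diagonal covariances (an illustration only, not the paper's code; the full metric requires the matrix square root of the covariance product, e.g. via scipy.linalg.sqrtm):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between N(mu1, diag(var1)) and N(mu2, diag(var2)):
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1 * var2)).
    Lower is better; identical statistics give 0."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    diff = mu1 - mu2
    return float(diff @ diff + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
```

In practice, the means and variances would come from Inception-network activations on the two image sets.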

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 57
25331 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data to extract useful, predictive information. It is a field of research that addresses various types of problems. Within data mining, classification is an important technique for categorizing different kinds of data. Diabetes is one of the most common diseases. This paper applies different classification techniques to a diabetes dataset using the Waikato Environment for Knowledge Analysis (WEKA) and determines which algorithm performs best. The best classification algorithm on the diabetic data is Naïve Bayes, with an accuracy of 76.31% and a model build time of 0.06 seconds.
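For context on the winning classifier: Naïve Bayes assumes conditionally independent features given the class. A minimal Gaussian Naïve Bayes sketch in numpy, illustrating the technique in general (not WEKA's implementation, and the class and method names here are hypothetical):

```python
import numpy as np

class GaussianNB:
    """Toy Gaussian Naive Bayes: per-class feature means/variances
    plus class priors, combined via Bayes' rule in log space."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors, self.means, self.vars = {}, {}, {}
        for c in self.classes:
            Xc = X[y == c]
            self.priors[c] = len(Xc) / len(X)
            self.means[c] = Xc.mean(axis=0)
            self.vars[c] = Xc.var(axis=0) + 1e-9  # avoid division by zero
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            log_prior = np.log(self.priors[c])
            log_like = -0.5 * (np.log(2 * np.pi * self.vars[c])
                               + (X - self.means[c]) ** 2 / self.vars[c]).sum(axis=1)
            scores.append(log_prior + log_like)
        return self.classes[np.argmax(np.vstack(scores), axis=0)]
```

On a real diabetes dataset such as Pima Indians, `X` would hold the clinical measurements and `y` the diagnosis labels.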

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 127
25330 Students’ Perceptions of Communication Design in Media: Case Study of Portuguese and Spanish Communication Students

Authors: Fátima Gonçalves, Joaquim Brigas, Jorge Gonçalves

Abstract:

The proliferation of mobile devices in society enables the media to disseminate information and knowledge more rapidly. Higher education students access these contents and share them with each other on the most diverse platforms, allowing ubiquitous access to information. This article presents the results, and the respective quantitative analysis, of a survey applied to communication students of two higher education institutions: one in Portugal and another in Spain. The results show that, in this sample, higher education students regularly access news content and believe traditional news sources to be more credible. Regarding online sources, access was mostly to free news content. This study intends to promote knowledge about the changes occurring in the relationship of higher education students with the media, characterizing how news consumption is processed by these students and considering the effects of digital media evolution. It presents not only the news sources they use but also some of their habits and their relationship with the news media.

Keywords: students' perceptions, communication design, mass media, higher education, digital media

Procedia PDF Downloads 219
25329 Urban Land Cover from GF-2 Satellite Images Using Object Based and Neural Network Classifications

Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi

Abstract:

China launched the GF-2 satellite in 2014. This study compares nearest-neighbor object-based classification and neural network classification for the fused GF-2 image. First, the GF-2 image was rectified. Second, the two classification methods were applied to the fused GF-2 image and compared. Third, the overall classification accuracy and kappa index were calculated. Results indicate that nearest-neighbor object-based classification outperforms neural network classification for urban mapping.
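The overall accuracy and kappa index used above can both be derived from a confusion matrix. A minimal sketch (a hypothetical helper, not the authors' code):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed (overall) accuracy
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)
```

Kappa corrects the raw accuracy for agreement expected by chance, which is why it is routinely reported alongside overall accuracy in land-cover studies.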

Keywords: GF-2 images, feature extraction, rectification, nearest neighbor object-based classification, segmentation algorithms, neural network classification, multilayer perceptron

Procedia PDF Downloads 358
25328 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron

Authors: Filippo Portera

Abstract:

Standard algorithms usually employ a loss in which each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. Here, we present several error-weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for the Multilayer Perceptron. Results show that the error is never worse than with the standard procedure and is often better.
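The generalization can be illustrated as a weighted absolute loss, where the consolidated routine is recovered with unit weights. A hypothetical numpy sketch (the paper's actual weighting schemes are not specified in this abstract):

```python
import numpy as np

def weighted_absolute_loss(y_true, y_pred, weights=None):
    """Mean absolute error with optional per-sample weights.
    weights=None reproduces the standard (unweighted) loss."""
    errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    if weights is None:
        weights = np.ones_like(errors)
    return float(np.mean(weights * errors))
```

A weighting scheme would then assign each sample's weight, for example, as a function of its error magnitude or its class.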

Keywords: loss, binary-classification, MLP, weights, regression

Procedia PDF Downloads 69
25327 Stigmatising AIDS: A Content Analysis of HIV/AIDS-Related News Articles Published in Three Major Philippine Broadsheets

Authors: L. Dinco John Christian, C. Ramos Camille, C. Reyes Maria Eloisa

Abstract:

HIV/AIDS has been dubbed one of the most stigmatised diseases of the recent century. Nelson Mandela pointed out that PLWHA (People Living With HIV/AIDS) are not killed by the disease but by the stigma surrounding it. Despite numerous studies on HIV/AIDS stigmatisation globally, little is known, in the Philippine context, about how evident and how powerful print media can be in framing readers' views. This study conducted a quantitative content analysis of HIV/AIDS-related news articles published over one year by the top three broadsheets: the Philippine Daily Inquirer, the Manila Bulletin and the Philippine Star. The HIV/AIDS-related news articles were collected and coded according to their tones, stigmatising statements/terminologies, and news prominence. The analysis supported the researchers' objectives: (1) HIV/AIDS-related news articles carry different tones, (2) there is a significant relation between stigmatising statements/terminologies and tone, and (3) the technical properties of HIV/AIDS-related news articles determine their prominence. Results revealed that although the broadsheets overtly reported HIV/AIDS in anti-stigma-toned articles, they covertly suggested stigma through the stigmatising statements/terminologies they contained, rather than plainly disseminating current medical knowledge about the transmission and treatment of the disease; the technical properties of the HIV/AIDS-related news articles determined their prominence.

Keywords: HIV, AIDS, newspaper, content analysis

Procedia PDF Downloads 409
25326 Machine Learning for Feature Selection and Classification of Systemic Lupus Erythematosus

Authors: H. Zidoum, A. AlShareedah, S. Al Sawafi, A. Al-Ansari, B. Al Lawati

Abstract:

Systemic lupus erythematosus (SLE) is an autoimmune disease with genetic and environmental components. SLE is characterized by a wide variability of clinical manifestations and a course frequently subject to unpredictable flares. Despite recent progress in classification tools, the early diagnosis of SLE is still an unmet need for many patients. This study proposes an interpretable disease classification model that combines the efficient, high predictive performance of CatBoost with the model-agnostic interpretation tools of Shapley Additive exPlanations (SHAP). The CatBoost model was trained on a local cohort of 219 Omani patients with SLE as well as other control diseases. The SHAP library was then used to generate individual explanations of the model's decisions and to rank clinical features by contribution. Overall, we achieved an AUC score of 0.945 and an F1-score of 0.92, and identified four clinical features (alopecia, renal disorders, cutaneous lupus, and hemolytic anemia), along with the patient's age, as having the greatest contribution to the prediction.
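SHAP ranks features by their Shapley-value contribution to an individual prediction. A toy exact-enumeration sketch of Shapley values with baseline replacement, illustrating the idea only (the SHAP library uses far more efficient model-specific algorithms; exact enumeration is exponential in the number of features):

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x): each feature's average
    marginal contribution over all orderings, with absent features set
    to a baseline vector."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # weight of coalition S in the Shapley formula
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                z_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi
```

For an additive model, each feature's Shapley value is simply its own contribution above the baseline; the values always sum to f(x) - f(baseline).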

Keywords: feature selection, classification, systemic lupus erythematosus, model interpretation, SHAP, Catboost

Procedia PDF Downloads 60
25325 Arabic Text Representation and Classification Methods: Current State of the Art

Authors: Rami Ayadi, Mohsen Maraoui, Mounir Zrigui

Abstract:

In this paper, we present a brief state of the art for Arabic text representation and classification methods. We decompose Arabic text classification into four categories. First, we describe algorithms applied to the classification of Arabic text. Second, we cite the major works comparing classification algorithms applied to Arabic text. We then mention authors who propose new classification methods, and finally we investigate the impact of preprocessing on Arabic text classification.

Keywords: text classification, Arabic, impact of preprocessing, classification algorithms

Procedia PDF Downloads 440
25324 Enhancing Technical Trading Strategy on the Bitcoin Market using News Headlines and Language Models

Authors: Mohammad Hosein Panahi, Naser Yazdani

Abstract:

We present a technical trading strategy that leverages the FinBERT language model and financial news analysis, with a focus on news related to a subset of Nasdaq 100 stocks. Our approach surpasses the baseline Range Break-out strategy in the Bitcoin market, yielding a remarkable 24.8% increase in the win ratio for all Friday trades and an impressive 48.9% surge for short trades specifically on Fridays. Moreover, we conduct rigorous hypothesis testing to establish the statistical significance of these improvements. Our findings underscore the considerable potential of our NLP-driven approach to enhance trading strategies and achieve greater profitability within financial markets.
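Testing whether a win-ratio improvement is statistically significant can be illustrated with a two-proportion z-test (a hypothetical sketch; the abstract does not state which test the authors used):

```python
import math

def two_proportion_z(wins_a, n_a, wins_b, n_b):
    """Two-proportion z statistic: is strategy B's win ratio
    significantly higher than strategy A's? Uses the pooled
    proportion for the standard error."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    p_pool = (wins_a + wins_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A z statistic above roughly 1.645 (one-sided) rejects, at the 5% level, the null hypothesis that the two strategies share the same win ratio.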

Keywords: quantitative finance, technical analysis, bitcoin market, NLP, language models, FinBERT, technical trading

Procedia PDF Downloads 37
25323 System for the Detecting of Fake Profiles on Online Social Networks Using Machine Learning and the Bio-Inspired Algorithms

Authors: Sekkal Nawel, Mahammed Nadir

Abstract:

The proliferation of online activities on Online Social Networks (OSNs) has captured significant user attention. However, this growth has been hindered by the emergence of fraudulent accounts that do not represent real individuals and violate privacy regulations within social network communities. Consequently, it is imperative to identify and remove these profiles to enhance the security of OSN users. In recent years, researchers have turned to machine learning (ML) to develop strategies and methods to tackle this issue. Numerous studies have compared various ML-based techniques in this field, but the existing literature still lacks a comprehensive examination, especially across different OSN platforms, and the utilization of bio-inspired algorithms has been largely overlooked. Our study conducts an extensive comparative analysis of fake profile detection techniques in online social networks. The results indicate that both supervised and unsupervised machine learning models are effective for detecting fake profiles on social media. To achieve optimal results, we have incorporated six bio-inspired algorithms to enhance the performance of fake profile identification.
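Bio-inspired feature selection is commonly implemented as a genetic algorithm over feature bit-masks. A minimal, self-contained sketch of that general pattern (hypothetical; the abstract does not name the six algorithms used), where `fitness` would normally score a mask by, say, cross-validated accuracy of a detector trained on the selected profile features:

```python
import random

def genetic_feature_selection(fitness, n_features, pop_size=20, generations=30, seed=0):
    """Minimal genetic algorithm for feature selection.
    Individuals are bit-masks over features; fitness scores a mask."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)          # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Other bio-inspired methods (particle swarm, ant colony, and so on) follow the same search-over-masks structure with different update rules.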

Keywords: machine learning, bio-inspired algorithm, detection, fake profile, system, social network

Procedia PDF Downloads 43
25322 A Narrative of Nationalism in Mainstream Media: The US, China, and COVID-19

Authors: Rachel Williams, Shiqi Yang

Abstract:

Our research explores the influence nationalism has had on United States media coverage of the COVID-19 pandemic as it relates to China, through a qualitative analysis of two US news networks, Fox News and CNN. In total, the transcripts of sixteen videos uploaded to YouTube, each with more than 100,000 views, were gathered for data processing. Co-occurrence networks generated by KH Coder illuminate the themes and narratives underpinning the reports from Fox News and CNN. The results of an in-depth keyword-based content analysis suggest that the pandemic has been framed in an ethnopopulist nationalist manner, although to varying degrees between networks. Specifically, the authors found that Fox News is more likely to report hypotheses or statements as fact; CNN, by contrast, is more likely to quote data and statements from official institutions. Future research using more systematic and quantitative methods could examine how nationalist narratives have developed in China and in other US news coverage, to expand on these findings.

Keywords: nationalism, media studies, US and China, COVID-19, social media, communication studies

Procedia PDF Downloads 34