Search results for: Arabic text classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3586

3226 Sentiment Classification Using Enhanced Contextual Valence Shifters

Authors: Vo Ngoc Phu, Phan Thi Tuoi

Abstract:

We have explored different methods of improving the accuracy of sentiment classification. The sentiment orientation of a document can be positive (+), negative (-), or neutral (0). We combine five dictionaries from [2, 3, 4, 5, 6] into a new one with 21,137 entries. The new dictionary contains many verbs, adverbs, phrases, and idioms that are absent from the original five. The paper shows that our proposed method, based on the combination of the Term-Counting method and the Enhanced Contextual Valence Shifters method, improves the accuracy of sentiment classification. The combined method achieves an accuracy of 68.984% on the testing dataset and 69.224% on the training dataset. All of these methods are implemented to classify reviews using our new dictionary and the Internet Movie dataset.
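As an illustration, here is a minimal sketch of term counting combined with contextual valence shifters; the tiny lexicon, negator, and intensifier lists are made-up stand-ins, not the authors' 21,137-entry dictionary.

```python
# Minimal sketch: term counting with contextual valence shifters.
# LEXICON, NEGATORS, and INTENSIFIERS are illustrative stand-ins.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}
NEGATORS = {"not", "never", "no"}               # flip valence
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}   # scale valence

def score(tokens):
    total, flip, scale = 0.0, 1.0, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            flip = -1.0
            continue
        if tok in INTENSIFIERS:
            scale = INTENSIFIERS[tok]
            continue
        if tok in LEXICON:
            total += flip * scale * LEXICON[tok]
        flip, scale = 1.0, 1.0  # shifters apply to the next word only
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

print(score("the movie was not good".split()))  # negative
```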

Keywords: sentiment classification, sentiment orientation, valence shifters, contextual valence shifters, term counting

Procedia PDF Downloads 475
3225 The Voiceless Dental-Alveolar Common Augment in Arabic and Other Semitic Languages: A Morphophonemic Comparison

Authors: Tarek Soliman Mostafa Soliman Al-Nana'i

Abstract:

There are non-steady voiced augments in the Semitic languages. In morphological and structural augmentation, two sounds served as augments in all Semitic languages at the level of the spoken language, corresponding to two letters at the level of the written language: the hamza and the ta'. This research studies only the second of them; we therefore define it as "the voiceless dental-alveolar common augment" (VDACA) to distinguish it from the glottal sound "hamza", whether it occurs first, in the middle, or last, in a noun or in a verb, in Arabic and its equivalents in the Semitic languages. What is meant by VDACA is the ta' that is added to the root of the word at the morphological level: the word "voiceless" excludes the voiced sounds studied before; "dental-alveolar" excludes the laryngeal sound among them, the hamza; and the word "common" excludes the uncommon voiceless sounds, which are sīn, shīn, and hā'. The study is limited to the ta' alone among the Arabic sounds, and this designation faced a problem of identification, because the name of the ta' is not the same across the Semitic languages. Hebrew, for example, has "tav", pronounced with the voiced (v) that does not exist in Arabic, and the sound carries different names in other Semitic languages, such as "taw" or "tau" in Old Syriac, and so on. This goes hand in hand with the insistence on keeping a distance from the written level and referring instead to the phonetic aspect, which in this study is closely linked to the morphological level; the study is therefore "morphophonemic". The Semitic languages considered in this study are Akkadian, Ugaritic, Hebrew, Syriac, Mandaean, Ge'ez, and Amharic. The problem of the study is the agreement or difference between these languages in the position of that augment (first, middle, or last), and the determination of the characteristics distinguishing each language from the others. The methodology is the comparative approach across the Semitic languages, which is based on the descriptive approach for each language. The study is divided into an introduction, four sections, and a conclusion. The introduction presents the subject of the study, its importance, motives, problem, methodology, and division. The first section treats VDACA as a non-common phoneme; the second, VDACA as a common phoneme; the third, VDACA as a functional morpheme; and the fourth offers commentary and conclusions with the most important results. The positions of VDACA in Arabic and the other Semitic languages, in nouns and verbs, are limited to first, middle, and last. The research identifies the individual addition, which is shared with other augments, and proves that this augmentation is constant in all Semitic languages, while characteristics remain that distinguish each language from the others.

Keywords: voiceless, dental-alveolar, augment, Arabic, Semitic languages

Procedia PDF Downloads 38
3224 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds, in an unsupervised manner, a contextual model containing vector representations of concepts in the corpus (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide 'dry mouth,' 'itchy skin,' and 'blurred vision'); the system matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts must be extracted from unseen medical text; the system extracts key phrases from the new text, matches them against the complete set of terms from step 2, and the most semantically similar are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: 'The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes.' Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
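As an illustration of step 2, the sketch below expands a seed set by cosine similarity in an embedding space; the phrase vectors are made up for illustration and stand in for an unsupervised model such as Phrase2Vec trained on an in-domain corpus like PubMed (step 1).

```python
# Minimal sketch of seed-set expansion by embedding similarity.
# The tiny vectors are hypothetical; real ones would come from an
# unsupervised phrase embedding model trained on a medical corpus.
import numpy as np

embeddings = {
    "dry mouth":      np.array([0.9, 0.1, 0.0]),
    "itchy skin":     np.array([0.8, 0.2, 0.1]),
    "blurred vision": np.array([0.7, 0.3, 0.0]),
    "weight loss":    np.array([0.8, 0.1, 0.1]),
    "metformin":      np.array([0.0, 0.9, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(seeds, k=2):
    centroid = np.mean([embeddings[s] for s in seeds], axis=0)
    candidates = [t for t in embeddings if t not in seeds]
    return sorted(candidates, key=lambda t: cosine(embeddings[t], centroid),
                  reverse=True)[:k]

print(expand(["dry mouth", "itchy skin", "blurred vision"]))
# ['weight loss', 'metformin'] -- 'weight loss' ranks first
```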

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 106
3223 Analysis of User Interface Design in Mobile Teaching Apps

Authors: Asma Ashoul

Abstract:

Nowadays, smartphones play a major role in our lives, whether for communicating with family and friends or for learning different things. Using smartphones to learn and teach is now common in places like schools and colleges. Therefore, developing an app that teaches the Arabic language may help some groups in society learn a second language; for example, children aged five and older can learn quickly using smartphones. The problem addressed is that the Arabic language is at risk of falling out of everyday use. The developer therefore set out to build an app that would help the younger generation learn the Arabic language. Research on user interface design was carried out to help the developer choose appropriate layouts and designs. Developing the artefact involved several stages: first, analyzing with the client the requirements to be developed; second, designing the user interface based on the literature review; third, developing and testing the application, documenting all the tools used; and lastly, evaluation and future recommendations, comprising an overall view of the application followed by the client's feedback. Requirements were gathered through client meetings focused on the interface design. The project followed an agile development methodology, which helped the developer finish the work on time.

Keywords: developer, application, interface design, layout, Agile, client

Procedia PDF Downloads 87
3222 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is hence adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed improvement of more than double the speed of other considered systems (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
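For reference, a minimal sketch of the RGB-to-YIQ conversion that precedes embedding, using the standard NTSC transform coefficients; the embedding step itself (inserting text bits as noise) is only indicated by a comment.

```python
# Minimal sketch: RGB <-> YIQ conversion around a watermark-embedding step.
import numpy as np

RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):           # img: H x W x 3, values in [0, 1]
    return img @ RGB2YIQ.T

def yiq_to_rgb(img):
    return img @ np.linalg.inv(RGB2YIQ).T

img = np.random.rand(4, 4, 3)  # fake color image
yiq = rgb_to_yiq(img)
# ... embed watermark text bits at random positions in the I/Q planes ...
restored = yiq_to_rgb(yiq)
print(np.allclose(img, restored))  # True: the transform is invertible
```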

Keywords: steganography, watermarking, time complexity measurements, private keys

Procedia PDF Downloads 118
3221 Detection and Classification of Mammogram Images Using Principal Component Analysis and Lazy Classifiers

Authors: Rajkumar Kolangarakandy

Abstract:

Feature extraction and selection is the primary part of any mammogram classification algorithm. The choice of features, attributes, or measurements has an important influence on any classification system. Discrete Wavelet Transform (DWT) coefficients are among the prominent features for representing images in the frequency domain. The features obtained after decomposing the mammogram images using wavelet transforms have high dimension; even so, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transform coefficients and lazy classifiers is proposed. The classification is accomplished at two different levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are further classified as benign or malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL, and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and they are also compared with the performance obtained without dimensionality reduction.
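A minimal sketch of the pipeline follows: DWT features, PCA reduction, then an instance-based (lazy) classifier. KStar, IBL, and LWL are Weka classifiers; k-nearest neighbours stands in here as the closest scikit-learn analogue, and random arrays stand in for mammogram ROIs.

```python
# Minimal sketch: DWT features -> PCA -> lazy (instance-based) classifier.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
rois = rng.random((40, 64, 64))        # 40 fake 64x64 mammogram ROIs
labels = rng.integers(0, 2, size=40)   # level 1: 0 = normal, 1 = abnormal

def dwt_features(roi, wavelet="db4", level=2):
    approx = pywt.wavedec2(roi, wavelet, level=level)[0]  # LL subband
    return approx.ravel()              # high-dimensional, correlated features

X = np.array([dwt_features(r) for r in rois])
clf = make_pipeline(PCA(n_components=10),                 # dimension reduction
                    KNeighborsClassifier(n_neighbors=3))  # lazy classifier
clf.fit(X, labels)
print(clf.score(X, labels))            # training accuracy on the toy data
```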

Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL

Procedia PDF Downloads 313
3220 Characterization, Classification and Fertility Capability Classification of Three Rice Zones of Ebonyi State, Southeastern Nigeria

Authors: Sunday Nathaniel Obasi, Chiamak Chinasa Obasi

Abstract:

Soil characterization and classification provide the basic information necessary to create functional evaluation and soil classification schemes. Fertility capability classification (FCC), on the other hand, is a technical system that groups soils according to the kinds of problems they present for the management of soil physical and chemical properties. This research was carried out in Ebonyi State, an agrarian state and a leading rice-producing part of southeastern Nigeria. In order to make the most of the soil and enhance rice productivity in Ebonyi soils, soil classification and fertility classification information need to be supplied. The state was grouped into three locations according to its agricultural zones, namely Ebonyi North, Ebonyi Central, and Ebonyi South, representing the Abakaliki, Ikwo, and Ivo locations, respectively. Major rice-growing areas were located, and two profile pits were sunk in each of the studied zones, from which soils were characterized and classified and a fertility capability classification (FCC) was developed. Soil classification was done using the United States Department of Agriculture (USDA) Soil Taxonomy and correlated with the World Reference Base for Soil Resources. The results classified Abakaliki 1 and Abakaliki 2 as Typic Fluvaquents (Ochric Fluvisols); Ikwo 1 as Vertic Eutrudepts (Eutric Vertisols); Ikwo 2 as Typic Eutrudepts (Eutric Cambisols); and Ivo 1 and Ivo 2 as Aquic Eutrudepts (Gleyic Leptosols). The FCC revealed that all studied soils had mostly loamy topsoils and subsoils, except Ikwo 1, which had a clayey topsoil. Limitations encountered in the studied soils include dryness (d), low ECEC (e), low nutrient capital reserve (k), and waterlogging/anaerobic conditions (gley). Thus, the FCC classifications were Ldek for Abakaliki 1 and 2, Ckv for Ikwo 1, and LCk for Ikwo 2, while Ivo 1 and 2 were Legk and Lgk, respectively.

Keywords: soil classification, soil fertility, limitations, modifiers, Southeastern Nigeria

Procedia PDF Downloads 108
3219 Land Cover Classification Using Sentinel-2 Image Data and Random Forest Algorithm

Authors: Thanh Noi Phan, Martin Kappas, Jan Degener

Abstract:

The recently launched Sentinel-2 (S2) satellite (June 2015) brings great potential and opportunities for land use/cover mapping applications, due to its fine spatial resolution, multispectral coverage, and high temporal resolution. So far, only a handful of studies have used real S2 data for land cover classification, and in northern Vietnam especially, to the best of our knowledge, no studies have used S2 data for land cover mapping. The aim of this study is to provide a preliminary result of land cover classification using Sentinel-2 data with a rising state-of-the-art classifier, Random Forest. A case study with heterogeneous land use/cover east of Hanoi, Vietnam was chosen. All ten spectral bands of 10 and 20 m pixel size of the S2 images were used, with the 10 m bands resampled to 20 m. Among several classification algorithms, the supervised Random Forest (RF) classifier was applied because it has been reported as one of the most accurate methods for satellite image classification. The results showed that the red-edge and shortwave infrared (SWIR) bands play an important role in the land cover classification results. A very high overall accuracy, above 90%, was achieved.
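A minimal sketch of pixel-wise Random Forest classification in this spirit, with random arrays standing in for the ten resampled Sentinel-2 bands and the reference labels; the feature importances hint at how band contributions (e.g., red-edge, SWIR) can be inspected.

```python
# Minimal sketch: pixel-wise Random Forest land cover classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pixels, n_bands = 5000, 10
X = rng.random((n_pixels, n_bands))     # fake band reflectances per pixel
y = rng.integers(0, 6, size=n_pixels)   # six hypothetical land cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", rf.score(X_te, y_te))
print("band importances:", rf.feature_importances_.round(3))
```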

Keywords: classification algorithm, classification, land cover, random forest, sentinel 2, Vietnam

Procedia PDF Downloads 348
3218 Classification of Cochannel Signals Using Cyclostationary Signal Processing and Deep Learning

Authors: Bryan Crompton, Daniel Giger, Tanay Mehta, Apurva Mody

Abstract:

The task of classifying radio frequency (RF) signals has seen recent success in employing deep neural network models. In this work, we present a combined signal processing and machine learning approach to signal classification for cochannel anomalous signals. The power spectral density and cyclostationary signal processing features of a captured signal are computed and fed into a neural net to produce a classification decision. Our combined signal preprocessing and machine learning approach allows for simpler neural networks with fast training times and small computational resource requirements for inference, at the cost of longer preprocessing time.
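A minimal sketch of the idea, assuming Welch power spectral density features feeding a small neural network; random signals stand in for RF captures, and the cyclostationary features are omitted for brevity.

```python
# Minimal sketch: spectral features -> small neural net classifier.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
signals = rng.standard_normal((200, 1024))   # 200 fake signal captures
labels = rng.integers(0, 3, size=200)        # three hypothetical classes

# Power spectral density as the preprocessing step (129 features each).
X = np.array([welch(s, nperseg=256)[1] for s in signals])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, labels)
print(clf.score(X, labels))   # training accuracy on the toy data
```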

Keywords: signal processing, machine learning, cyclostationary signal processing, signal classification

Procedia PDF Downloads 69
3217 Degemination in Emirati Pidgin Arabic: A Sociolinguistic Perspective

Authors: Abdel Rahman Mitib Altakhaineh, Abdul Salam Mohamad Alnamer, Sulafah Abdul Salam Alnamer

Abstract:

This study examines the production of gemination in Emirati Pidgin Arabic (EPA) spoken by blue-collar workers in the United Arab Emirates (UAE). A simple naming test was designed to test the production of geminates, and a follow-up discussion was conducted with some of the participants to obtain complementary qualitative analysis. The goal of the test was to determine whether the EPA speakers would produce a geminated or degeminated phoneme. A semi-structured interview was conducted with a subset of the study cohort to obtain the participants' own explanations of why they degeminated the consonants. Our findings suggest that the exercise of this choice functions as a sociolinguistic strategy, in a similar manner to that observed by Labov in his study of Martha's Vineyard. The findings also show that speakers of EPA are inclined to degeminate consonantal geminates to establish themselves as members of a particular social group. The reasons given for wanting to achieve this aim were to claim privileges only available to members of this group (such as employment) and to distinguish themselves from the dominant cultural group. The study concludes that degemination in EPA has developed into a sociolinguistic solidarity marker.

Keywords: sociolinguistics, morphophonology, degemination, solidarity, Emirati pidgin Arabic

Procedia PDF Downloads 182
3216 Using Data Mining Technique for Scholarship Disbursement

Authors: J. K. Alhassan, S. A. Lawal

Abstract:

This work concerns decision tree-based classification for the disbursement of scholarships. A tree-based data mining classification technique is used in order to determine the generic rules for disbursing the scholarship. Based on the rules defined from the tree, the system is able to determine the class (status) to which an applicant belongs: Granted or Not Granted. Applicants that fall into the Granted class successfully acquire the scholarship, while those in the Not Granted class are unsuccessful in the scheme. An algorithm that can be used to classify the applicants based on the rules from the tree-based classification was also developed. Tree-based classification is adopted because of its efficiency, effectiveness, and easy-to-comprehend features. The system was tested with data from the National Information Technology Development Agency (NITDA) Abuja, a parastatal of the Federal Ministry of Communication Technology mandated to develop and regulate information technology in Nigeria. The system was found to work according to specification. It is therefore recommended for all scholarship disbursement organizations.
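A minimal sketch of decision tree-based classification of applicants; the features (CGPA, income band, interview score) are hypothetical, as the actual NITDA criteria are not given in the abstract.

```python
# Minimal sketch: decision tree rules for Granted / Not Granted.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant records: [cgpa, income_band, interview_score].
X = np.array([[4.5, 0, 8], [2.1, 2, 4], [3.8, 1, 7],
              [2.5, 2, 5], [4.9, 0, 9], [3.0, 1, 6]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = Granted, 0 = Not Granted

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["cgpa", "income_band", "interview"]))
print(tree.predict([[3.9, 1, 8]]))  # classify a new applicant
```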

Keywords: classification, data mining, decision tree, scholarship

Procedia PDF Downloads 344
3215 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model for Land Cover Studies

Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi

Abstract:

Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, a BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. An overall classification accuracy of 90.95% shows that the proposed algorithm is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method decreases the undesirable speckle effect of SAR images.
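A minimal sketch of a BOVW pipeline in this spirit: low-level descriptors are clustered into a codebook, each image is histogrammed over the codebook, and an SVM classifies the histograms; random descriptors stand in for PolSAR features.

```python
# Minimal sketch: Bag of Visual Words (codebook -> histogram -> SVM).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_images, descs_per_image, dim, vocab = 30, 50, 8, 16
descriptors = rng.random((n_images, descs_per_image, dim))  # fake features
labels = rng.integers(0, 3, size=n_images)                  # fake classes

# Build the visual vocabulary by clustering all descriptors.
codebook = KMeans(n_clusters=vocab, n_init=10, random_state=0)
codebook.fit(descriptors.reshape(-1, dim))

def bovw_histogram(descs):
    words = codebook.predict(descs)
    return np.bincount(words, minlength=vocab) / len(words)

X = np.array([bovw_histogram(d) for d in descriptors])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))   # training accuracy on the toy data
```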

Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)

Procedia PDF Downloads 177
3214 Novel Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we propose a novel inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can derive simultaneously both the posterior distribution of a latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework. It is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayesian formula and the LA technique, we derive approximately the posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimators of the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function. These two steps repeat iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
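For comparison, scikit-learn's GaussianProcessClassifier likewise uses the Laplace approximation for the latent posterior and maximizes the marginal likelihood over kernel hyper-parameters, though it handles the multi-class case with a one-vs-rest wrapper rather than the authors' joint model; random vectors stand in for action descriptors.

```python
# Minimal sketch: multi-class GP classification with Laplace approximation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
X = rng.random((60, 5))            # fake per-clip action descriptors
y = rng.integers(0, 3, size=60)    # three action classes

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X, y)                      # hyper-parameters fit by marginal likelihood
print(gpc.predict_proba(X[:2]).round(3))  # approximate class posteriors
print(gpc.kernel_)                         # fitted hyper-parameters
```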

Keywords: bayesian rule, gaussian process classification model with multiclass, gaussian process prior, human action classification, laplace approximation, variational EM algorithm

Procedia PDF Downloads 304
3213 Determining a Bilingualism Index: Evidence From Lebanese Control Bilinguals

Authors: Rania Kassir, Christophe Dos Santos, Halim Abboud, Olivier Godefroy

Abstract:

The ability to communicate in at least two different languages is shared by a growing number of humans. Recently, many researchers have been studying the elderly bilingual population around the world in neuroscience, and yet, to date, there is no accurate or universal measure or methodology for examining bilingualism across these studies, which constitutes a real challenge for generalizing results. This study contributes to the quest for a multidimensional bilingualism index and to the language proficiency literature by investigating a new bilingualism index drawn from a reliable subjective questionnaire, the Language Experience and Proficiency Questionnaire (LEAP-Q), multi-linguistic tests, and a diverse bilingual population, all featured in one analysis and one index. One hundred Lebanese subjects aged between 55 and 92 years old, divided into three bilingualism subgroups (Arabic prominent, balanced, and French prominent), were recruited and underwent the LEAP-Q together with a set of linguistic and cognitive tests. The analysis of the collected data led to the creation of a robust bilingualism index from speaking and oral understanding scores that specifically identifies the bilingualism subtype according to cutoff scores. The practical implications of this index, particularly its use within bilingual populations, are addressed in the conclusion of this work.

Keywords: bilingualism, language dominance, bilingualism index, balanced bilingualism, Arabic first language, Lebanese, Arabic-French bilingualism

Procedia PDF Downloads 101
3212 Evaluating 8D Reports Using Text-Mining

Authors: Benjamin Kuester, Bjoern Eilert, Malte Stonis, Ludger Overmeyer

Abstract:

Increasing quality requirements make reliable and effective quality management indispensable. This includes complaint handling, in which the 8D method is widely used. The 8D report, as the written documentation of the 8D method, is one of the key quality documents, as it internally secures quality standards and acts as a communication medium with the customer. In practice, however, 8D reports are mostly faulty and of poor quality, and there is no quality control of 8D reports today. This paper describes the use of natural language processing for the automated evaluation of 8D reports. Based on semantic analysis and text-mining algorithms, the presented system is able to uncover content-related and formal quality deficiencies and thus increases the quality of complaint processing in the long term.

Keywords: 8D report, complaint management, evaluation system, text-mining

Procedia PDF Downloads 278
3211 Polarimetric Synthetic Aperture Radar Data Classification Using Support Vector Machine and Mahalanobis Distance

Authors: Najoua El Hajjaji El Idrissi, Necip Gokhan Kasapoglu

Abstract:

Polarimetric Synthetic Aperture Radar-based imaging is a powerful technique used for earth observation and classification of surfaces. Forest evolution has been one of the vital areas of attention for remote sensing experts. Information about forest areas can be obtained by remote sensing, whether by using active radars or optical instruments. However, due to several weather constraints, such as cloud cover, only limited information can be recovered using optical data, and for that reason, Polarimetric Synthetic Aperture Radar (PolSAR) is used as a powerful tool for forestry inventory. In this paper, we applied a support vector machine (SVM) and the Mahalanobis distance to fully polarimetric AIRSAR P-, L-, and C-band data from the Nezer forest areas; the classification is based on the separation of different tree ages. The classification results were evaluated, and they show that the SVM performs better than the Mahalanobis distance, achieving approximately 75% accuracy. This result proves that SVM classification can be used as a useful method to evaluate fully polarimetric SAR data with a sufficient level of accuracy.
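A minimal sketch of a Mahalanobis minimum-distance classifier of the kind compared against the SVM here; random vectors stand in for per-pixel PolSAR features.

```python
# Minimal sketch: Mahalanobis minimum-distance classification.
import numpy as np

rng = np.random.default_rng(5)
# Fake training data: three classes with shifted feature distributions.
classes = {c: rng.random((100, 6)) + c for c in range(3)}

# Per-class mean and inverse covariance.
stats = {c: (X.mean(axis=0), np.linalg.inv(np.cov(X, rowvar=False)))
         for c, X in classes.items()}

def classify(x):
    def d2(c):
        mu, inv_cov = stats[c]
        diff = x - mu
        return float(diff @ inv_cov @ diff)   # squared Mahalanobis distance
    return min(stats, key=d2)                 # nearest class wins

print(classify(rng.random(6) + 2))  # assigned to the nearest class: 2
```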

Keywords: classification, synthetic aperture radar, SAR polarimetry, support vector machine, mahalanobis distance

Procedia PDF Downloads 101
3210 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a supervised learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm-based approach that explores the association rules space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval, and the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 300
3209 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms

Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao

Abstract:

Earth's environment and its evolution can be seen through satellite images in near real-time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing it using data pre-processing techniques, feeding the processed data into the proposed algorithm, and analyzing the obtained result. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification. The dataset used is the DeepGlobe Land Cover Classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates, in cascade or in parallel, to determine the scale of segments.
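A minimal sketch of running DeepLabv3 with torchvision; the seven output classes follow the DeepGlobe label set, the random tensor stands in for a satellite tile, and training on DeepGlobe itself is not shown.

```python
# Minimal sketch: DeepLabv3 semantic segmentation forward pass.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=7)  # 7 DeepGlobe classes
model.eval()

image = torch.rand(1, 3, 256, 256)          # fake satellite tile
with torch.no_grad():
    logits = model(image)["out"]            # shape (1, 7, 256, 256)
prediction = logits.argmax(dim=1)           # per-pixel class map
print(prediction.shape)                     # torch.Size([1, 256, 256])
```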

Keywords: area calculation, atrous convolution, deep globe land cover classification, deepLabv3, land cover classification, resnet 50

Procedia PDF Downloads 114
3208 Classification of Opaque Exterior Walls of Buildings from a Sustainable Point of View

Authors: Michelle Sánchez de León Brajkovich, Nuria Martí Audi

Abstract:

The envelope is one of the most important elements when analyzing the operation of a building in terms of sustainability. Taking this into consideration, this research focuses on setting up a classification system for opaque envelope systems, crossing the knowledge and parameters of construction systems with the sustainability requirements they may face, in order to better understand how these systems work with respect to their sustainable contribution to the building. This paper therefore evaluates the importance of envelope design for building sustainability. It analyses the parameters that make construction systems behave differently in terms of sustainability. At the same time, it explains the classification process generated from this analysis, which results in a classification that accommodates all opaque vertical envelope construction systems.

Keywords: sustainable, exterior walls, envelope, facades, construction systems, energy efficiency

Procedia PDF Downloads 538
3207 Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases

Authors: Bandhan Dey, Muhsina Bintoon Yiasha, Gulam Sulaman Choudhury

Abstract:

Chest disease is one of the most problematic ailments in our everyday life, and many chest diseases are known. Diagnosing them correctly plays a vital role in the process of treatment. Many methods have been developed explicitly for different chest diseases, but the most common approach for diagnosing these diseases is through X-ray. In this paper, we propose a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. In the present work, we used the transfer learning method for better accuracy and a faster training phase. The performance of three architectures is considered: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures using public digital chest X-ray datasets with six classes (i.e., COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). The experiments were conducted on this six-class classification, and we found that VGG16 outperforms the other proposed models with an accuracy of 95%.
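A minimal sketch of the transfer learning setup with a frozen VGG16 backbone and a new six-class head; the input size, head layers, and optimizer are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: transfer learning with a frozen VGG16 backbone.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),    # assumed head size
    layers.Dropout(0.5),
    layers.Dense(6, activation="softmax"),   # six chest X-ray classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, ...) on the chest X-ray dataset
```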

Keywords: deep learning, image classification, X-ray images, Tensorflow, Keras, chest diseases, convolutional neural networks, multi-classification

Procedia PDF Downloads 59
3206 Communicating Meaning through Translanguaging: The Case of Multilingual Interactions of Algerians on Facebook

Authors: F. Abdelhamid

Abstract:

Algeria is a multilingual speech community where individuals constantly mix between codes in spoken discourse. Code is used as a cover term to refer to the existing languages and language varieties, which include, among others, the mother tongue of the majority, Algerian Arabic, the official language, Modern Standard Arabic, and the foreign languages French and English. The present study explores whether Algerians mix between these codes in online communication as well. Facebook is the selected platform from which data is collected because it is the preferred and most used social media site for most Algerians. Adopting the notion of translanguaging, this study attempts to explain how users of Facebook use multilingual messages to communicate meaning. Accordingly, multilingual interactions are not approached from a pejorative perspective but rather as a creative linguistic behavior that multilinguals utilize to achieve intended meanings. The study is intended as a contribution to research on multilingualism online because, although an extensive literature has investigated multilingualism in spoken discourse, limited research has investigated it online. Its aim is two-fold. First, it aims at ensuring that the selected platform for analysis, namely Facebook, can be a source of multilingual data to enable the qualitative analysis; this is done by measuring frequency rates of multilingual instances. Second, when enough multilingual instances are encountered, it aims at describing and interpreting a selection of them. 120 posts and 16,335 comments were collected from two Facebook pages. Analysis revealed that a third of the collected data consists of multilingual messages. Users of Facebook mixed between the four mentioned codes in writing their messages; the most frequent cases are mixing between Algerian Arabic and French and between Algerian Arabic and Modern Standard Arabic. A focused qualitative analysis followed, in which selected examples are interpreted and explained. It appears that Algerians mix between codes when communicating online even though writing is a conscious type of communication. This suggests that such behavior is not a random and corrupted way of communicating but rather an intentional and natural one.

Keywords: Algerian speech community, computer mediated communication, languages in contact, multilingualism, translanguaging

Procedia PDF Downloads 102
3205 Enframing the Smart City: Utilizing Heidegger's 'The Question Concerning Technology' as a Framework to Interpret Smart Urbanism

Authors: Will Brown

Abstract:

Martin Heidegger is considered to be one of the leading philosophical lights of the 20th century, and his lecture/essay 'The Question Concerning Technology' has proved an invaluable text in the study of technology and of how technology influences the world it is set upon. However, this text has not yet been applied to the rapid rise and proliferation of 'smart' cities. This article applies the aforementioned text to the smart city in order to provide a fresh, if not critical, analysis and interpretation of this phenomenon. The first section provides a brief literature review of smart urbanism in order to lay the groundwork necessary to apply Heidegger's work to the smart city, from which a framework is developed to interpret the infusion of digital sensing technologies into the urban milieu. This framework comprises four concepts put forward in Heidegger's text: circumscribing, bringing-forth, challenging, and standing-reserve. A concluding chapter is based upon the notion of enframing, arguing that once the rubric of data collection is placed within the urban system, future systems will require the capability to harvest data, resulting in an ever-renewing smart city.

Keywords: air quality sensing, big data, Martin Heidegger, smart city

Procedia PDF Downloads 176
3204 Performance Evaluation of Contemporary Classifiers for Automatic Detection of Epileptic EEG

Authors: K. E. Ch. Vidyasagar, M. Moghavvemi, T. S. S. T. Prabhat

Abstract:

Epilepsy is a global problem, and with seizures eluding even the smartest of diagnosticians, automatic detection using the electroencephalogram (EEG) would have a huge impact on diagnosis of the disorder. Among a multitude of methods for automatic epilepsy detection, the best method should be identified on the basis of classification accuracy. This paper reasons out and rationalizes the best methods for classification. Since accuracy depends on the classifier, this paper compares classifiers such as quadratic discriminant analysis (QDA), classification and regression trees (CART), the support vector machine (SVM), the naive Bayes classifier (NBC), linear discriminant analysis (LDA), K-nearest neighbors (KNN), and artificial neural networks (ANN). Results show that the ANN is the most accurate of all the above classifiers, with 97.7% accuracy, 97.25% specificity, and 98.28% sensitivity. It is followed closely by the SVM, with a 1% variation in result. These results should help researchers choose the best classifier for the detection of epilepsy.
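A minimal sketch of such a classifier comparison using scikit-learn equivalents (CART is the default DecisionTreeClassifier; the ANN is a multilayer perceptron); random features stand in for EEG-derived ones.

```python
# Minimal sketch: cross-validated comparison of the seven classifiers.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
X = rng.random((300, 12))              # fake EEG feature vectors
y = rng.integers(0, 2, size=300)       # seizure vs. non-seizure

models = {
    "QDA": QuadraticDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "SVM": SVC(),
    "NBC": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "ANN": MLPClassifier(max_iter=500),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```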

Keywords: classification, seizure, KNN, SVM, LDA, ANN, epilepsy

Procedia PDF Downloads 489
3203 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programs, developing teachers, applying the programs in schools, monitoring and feedback, and then evaluating the language program resulting from these processes. Looking back at the history of language teaching during the twentieth century, we find that developing the delivery method is the most crucial aspect of change in language teaching doctrines. The concept of a delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the process of development must include all the curriculum elements in their comprehensive sense: linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each unit consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, neglect essential points while highlighting other, less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content, so the teacher often navigates among diverse references to prepare a lesson and hardly finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress. Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references for curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities, and a set of general guidelines in the field of educational development is reached. The paper highlights the need to identify weaknesses in educational curricula, to decide on the twenty-first-century skills that must be employed in Arabic education curricula, and to employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the series for teaching Arabic to speakers of other languages in regular schools do not address the skills of the twenty-first century, which is what the researcher tries to address in the proposed unit. The experimental method is the method of this study, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages are addressed and which are more useful and attractive to students.

Keywords: curriculum, development, Arabic language, non-native, skills

Procedia PDF Downloads 46
3202 3D Receiver Operator Characteristic Histogram

Authors: Xiaoli Zhang, Xiongfei Li, Yuncong Feng

Abstract:

ROC curves, a widely used evaluation tool in the machine learning field, plot the tradeoff between the true positive rate and the false positive rate. However, they are blamed for ignoring some vital information in the evaluation process, such as the amount of information about the target that each instance carries and the predicted score given by each classification model to each instance. Hence, in this paper, a new classification performance method is proposed by extending Receiver Operator Characteristic (ROC) curves to 3D space, denoted as the 3D ROC Histogram. In the histogram, the

Keywords: classification, performance evaluation, receiver operating characteristic histogram, hardness prediction

Procedia PDF Downloads 289
3201 Role of Natural Language Processing in Information Retrieval; Challenges and Opportunities

Authors: Khaled M. Alhawiti

Abstract:

This paper aims to analyze the role of natural language processing (NLP) in the context of automated data retrieval, automated question answering, and text structuring. NLP techniques are gaining wider acceptance in real-life applications and industrial concerns. There are various complexities involved in processing natural language text so that it satisfies the needs of decision makers. This paper begins with a description of the qualities of NLP practices, then focuses on the challenges in natural language processing and discusses major NLP techniques. The last section describes opportunities and challenges for future research.

Keywords: data retrieval, information retrieval, natural language processing, text structuring

Procedia PDF Downloads 312
3200 Combined Odd Pair Autoregressive Coefficients for Epileptic EEG Signals Classification by Radial Basis Function Neural Network

Authors: Boukari Nassim

Abstract:

This paper describes the use of odd pair autoregressive coefficients (Yule-Walker and Burg) for the feature extraction of electroencephalogram (EEG) signals. For the classification, a radial basis function neural network (RBFNN) is employed. The RBFNN is described by its architecture and its characteristics: the RBF is defined by its spread, which is modified to improve the classification results. Five types of EEG signals are used in this work: Sets A and B for normal signals, Sets C and D for interictal signals, and Set E for ictal signals (available from Bonn University). For the outputs, two classes are given (AC, AD, AE, BC, BD, BE, CE, DE); the best accuracy is calculated at 99% for the combined odd pair autoregressive coefficients. Our method is very effective for the diagnosis of epileptic EEG signals.
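A minimal sketch of the feature extraction step, assuming statsmodels' Yule-Walker and Burg estimators; the random signal stands in for a Bonn EEG epoch, and the RBFNN classification step is not shown.

```python
# Minimal sketch: combined Yule-Walker and Burg AR coefficients as features.
import numpy as np
from statsmodels.regression.linear_model import burg, yule_walker

rng = np.random.default_rng(11)
segment = rng.standard_normal(4096)        # fake single-channel EEG epoch

order = 7                                   # an odd AR order, as an assumption
rho_yw, sigma_yw = yule_walker(segment, order=order)
rho_burg, sigma_burg = burg(segment, order=order)

# Concatenate both coefficient sets into one feature vector per epoch.
features = np.concatenate([rho_yw, rho_burg])
print(features.shape)                       # (14,)
```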

Keywords: epilepsy, EEG signals classification, combined odd pair autoregressive coefficients, radial basis function neural network

Procedia PDF Downloads 321
3199 Procedures and Strategies in Translation: Two Marathi Translations of Train to Pakistan by Khushwant Singh

Authors: Manoj Gujar

Abstract:

The present paper is an attempt to interpret two Marathi translations of Khushwant Singh's (1915-2014) novel Train to Pakistan (1956). The 20th century was branded as an era of liberalization, privatization, and globalization. Different countries and cultures have entered into interaction with one another in an unprecedented manner, and the world is becoming multilingual and multicultural. Democratic countries such as the U.S.A., the U.K., and India have become pivotal centers of interlingual and cross-cultural exchange, and people belonging to different nationalities have shown keen interest in knowing the characteristic features of different languages and their cultures. Here, 'translation' plays an important role in such multilingual and multicultural contexts. Translation is not only the translation of a language but the translation of a culture. In the act of translation, a translator makes use of such procedures as borrowing, definition, literal translation, substitution, lexical creation, omission, and addition, as well as their various combinations. Through these processes of translation, a text produced in one linguistic and cultural context can reach other linguistic and cultural contexts. A worthy work of art appeals to many readers. India being a multilingual country, we find multiple translations of the same text in different Indian languages; sometimes the same text appeals to different ages and gets translated into the same language by two or more translators. In this connection, the present paper studies how different translations of the same text differ in terms of procedures and strategies during the process of translating culture. The source text is Khushwant Singh's historical novel Train to Pakistan (1956). The novel was widely appreciated and so was translated into different regional languages in India. It has two Marathi translations: Agniratha (1972) by Hidayatkhan and Train to Pakistan (1980) by Anil Kinikar. This paper evaluates the strategies and procedures in translation through an analysis of these two Marathi translations. Hidayatkhan made many omissions of significant details and distorted the original text to a large extent, whereas Anil Kinikar has done justice to the source text by rendering it in Marathi as faithfully as possible.

Keywords: culture, multilingual, procedures and strategies, translation

Procedia PDF Downloads 346
3198 Unraveling the Threads of Madness: Henry Russell’s 'The Maniac' as an Advocate for Deinstitutionalization in the Nineteenth Century

Authors: T. J. Laws-Nicola

Abstract:

Henry Russell was best known as a composer of more than 300 songs. Many of his compositions were popular for their sentimental texts, as in 'The Old Armchair,' or for texts of a more political nature, such as 'Woodman, Spare That Tree!' Indeed, Russell had written songs of advocacy associated with abolitionism ('The Slave Ship') and environmentalism ('Woodman, Spare That Tree!'). 'The Maniac' is his only composition addressing the issue of institutionalization. The text is borrowed and adapted from the monodrama The Captive by M. G. 'Monk' Lewis. Through an analysis of form, harmony, melody, text, and thematic development, as well as the interactions between text and music, we can approach a clearer understanding of 'The Maniac' and of how its text and music interact. Select periodicals, such as The London Times, provide contemporary critical reviews of 'The Maniac.' Additional nineteenth-century songs whose texts focus on madness and/or institutionalization assist in building a stylistic and cultural context for 'The Maniac.' Through comparative analyses of 'The Maniac' with a body of songs on similar topics, we can approach a clear understanding of the song as a vehicle for deinstitutionalization.

Keywords: 19th century song, institutionalization, M. G. Lewis, Henry Russell

Procedia PDF Downloads 503
3197 A Tool to Measure the Usability Guidelines for Arab E-Government Websites

Authors: Omyma Alosaimi, Asma Alsumait

Abstract:

Website developers and designers should follow usability guidelines to provide a user-friendly interface. Using tools to measure usability, an evaluator can automatically evaluate hundreds of links within a few minutes, with the advantage of detecting violations that only machines can detect. For that reason, using a usability evaluation tool is important for finding as many violations as possible. There are many website usability testing tools, but none has been developed to measure the usability of e-government websites, let alone Arabic e-government websites. To measure the usability of Arabic e-government websites, a tool is developed and tested in this paper. A comparison between using a tool specifically developed for e-government websites and a general usability testing tool is presented.

Keywords: e-government, human computer interaction, usability evaluation, usability guidelines

Procedia PDF Downloads 394