Search results for: style classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2936

2456 Application of Fuzzy Clustering on Classification Agile Supply Chain

Authors: Hamidreza Fallah Lajimi, Elham Karami, Fatemeh Alinasab, Mostafa Mahdavikia

Abstract:

Being responsive is an increasingly important skill for firms in today’s global economy; thus firms must be agile. Naturally, it follows that an organization’s agility depends on its supply chain being agile. However, achieving supply chain agility is a function of other abilities within the organization. This paper analyses results from a survey of 71 Iranian manufacturing companies in order to identify some of the factors for agile organizations in managing their supply chains. We then classify these companies into four clusters using the fuzzy c-means technique, with four validation functions employed to determine the optimal number of clusters automatically.
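
As a concrete illustration of the clustering step, here is a minimal NumPy sketch of fuzzy c-means together with the partition coefficient, one common validity index used to choose the number of clusters; the 71x5 feature matrix, the fuzzifier value and the candidate cluster counts are illustrative assumptions, not the survey data from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2 / (m - 1)))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

def partition_coefficient(U):
    """One simple cluster-validity index: closer to 1 means crisper clustering."""
    return (U ** 2).sum() / U.shape[0]

# Hypothetical agility scores for 71 firms on 5 survey factors (placeholder data)
X = np.random.default_rng(1).random((71, 5))
scores = {c: partition_coefficient(fuzzy_c_means(X, c)[1]) for c in range(2, 7)}
best_c = max(scores, key=scores.get)           # pick the cluster count with the best index
print(scores, best_c)
```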

Keywords: agile supply chain, clustering, fuzzy clustering

Procedia PDF Downloads 454
2455 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR), have the potential to overcome the limitations of aerial imagery. To date, few studies have used that data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual high tree species ( > 17m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 119
2454 The Gut Microbiome in Cirrhosis and Hepatocellular Carcinoma: Characterization of Disease-Related Microbial Signature and the Possible Impact of Life Style and Nutrition

Authors: Lena Lapidot, Amir Amnon, Rita Nosenko, Veitsman Ella, Cohen-Ezra Oranit, Davidov Yana, Segev Shlomo, Koren Omry, Safran Michal, Ben-Ari Ziv

Abstract:

Introduction: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related mortality worldwide. Liver cirrhosis is the main predisposing risk factor for the development of HCC. The factor(s) influencing disease progression from cirrhosis to HCC remain unknown. Gut microbiota has recently emerged as a major player in different liver diseases; however, its association with HCC is still a mystery. Moreover, there might be an important association between the gut microbiota, nutrition, lifestyle and the progression of cirrhosis and HCC. The aim of our study was to characterize the gut microbial signature in association with the lifestyle and nutrition of patients with cirrhosis, HCC-cirrhosis and healthy controls. Design: Stool samples were collected from 95 individuals (30 patients with HCC, 38 patients with cirrhosis and 27 age-, gender- and BMI-matched healthy volunteers). All participants answered lifestyle and Food Frequency Questionnaires. 16S rRNA sequencing of fecal DNA was performed (MiSeq Illumina). Results: There was a significant decrease in alpha diversity in patients with cirrhosis (qvalue=0.033) and in patients with HCC-cirrhosis (qvalue=0.032) compared to healthy controls. The microbiota of patients with HCC-cirrhosis, compared to patients with cirrhosis, was characterized by a significant overrepresentation of Clostridium (pvalue=0.024) and CF231 (pvalue=0.010) and lower expression of Alphaproteobacteria (pvalue=0.039) and Verrucomicrobia (pvalue=0.036) at several taxonomic levels: Verrucomicrobiae, Verrucomicrobiales, Verrucomicrobiaceae and the genus Akkermansia (pvalue=0.039). Furthermore, an analysis of predicted metabolic pathways (KEGG level 2) showed a significant decrease in the diversity of metabolic pathways in patients with HCC-cirrhosis (qvalue=0.015) compared to controls, one of which was amino acid metabolism. Investigating the lifestyle and nutrition habits of patients with HCC-cirrhosis, we found significant correlations between intake of artificial sweeteners and Verrucomicrobia (qvalue=0.12), high sugar intake and Synergistetes (qvalue=0.021), and high BMI and the pathogen Campylobacter (qvalue=0.066). Moreover, overweight in patients with HCC-cirrhosis modified bacterial diversity (qvalue=0.023) and composition (qvalue=0.033). Conclusions: To the best of our knowledge, we present the first report of the gut microbial composition in patients with HCC-cirrhosis, compared with cirrhotic patients and healthy controls. We have demonstrated in our study that there are significant differences in the gut microbiome of patients with HCC-cirrhosis compared to cirrhotic patients and healthy controls. Our findings are all the more notable because the significantly increased bacteria Clostridium and CF231 in HCC-cirrhosis were not influenced by diet and lifestyle, implying that this change is due to the development of HCC. Further studies are needed to confirm these findings and assess causality.

Keywords: Cirrhosis, Hepatocellular carcinoma, life style, liver disease, microbiome, nutrition

Procedia PDF Downloads 107
2453 Purposes of Urdu Translations of the Meanings of Holy Quran

Authors: Muhammad Saleem

Abstract:

The research paper entitled above is a comprehensive and critical study of translations of the meanings of the Holy Qur’an. The discussion deals with the targets and purposes of Urdu (the national language of Pakistan) translators of the meanings of the Holy Qur’an. There are more than 400 translations of the meanings of the Holy Qur’an in the Urdu language. Muslims, non-Muslims and some organizations have made translations of the meanings of the Holy Qur’an to meet various targets. It is observed that not all Urdu translators have translated the Qur’an with a single objective and motivation; rather, some are biased and strive to discredit the Qur’an. Thus, they have made unauthentic and fabricated translations of the Qur’an. Some optimistically believe that they intend to do a service, whereas others pessimistically hold that they treacherously seek to further their rule. Some of them have been observed to be against Islam, starting their activities with spite, but after perceiving the truths of Islam and the miracle and greatness of the Holy Qur’an, they submitted to Islam, embracing it with pure hearts. Some translators made their translations of the meanings of the Holy Qur’an to serve Allah, and some of them have done their translations solely for earning. All these translations vary from one to another in style, trend, type and method. Some Urdu translations have been made to fulfill linguistic requirements. Some translations have been made by Muslim scholars to reduce the influence of Urdu translations of the meanings of the Holy Qur’an by non-Muslims. The article deals with the various purposes of the translators of the meanings of the Holy Qur’an.

Keywords: Qur'an, translation, urdu, language

Procedia PDF Downloads 13
2452 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features

Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan

Abstract:

Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN) and a Cartesian genetic programming evolved artificial neural network (CGPANN) have been explored in this study, without the application of any segmentation algorithm. The signals are first pre-processed to remove any unwanted frequencies. Both time and frequency domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. Results indicate that the SVM outperforms the rest with an accuracy of 73.64%.
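
The pipeline described in the abstract (band-pass pre-processing, time- and frequency-domain features, then an SVM) can be sketched in a few lines; the cut-off frequencies, the particular feature set and the random placeholder recordings below are assumptions for illustration rather than the authors' exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pcg_features(x, fs=2000):
    # Band-pass to remove unwanted frequencies (cut-offs are assumptions)
    b, a = butter(4, [25, 400], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    # Time-domain features: mean, std, skewness, zero-crossing rate
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    feats = [x.mean(), x.std(), ((x - x.mean()) ** 3).mean() / x.std() ** 3, zcr]
    # Frequency-domain features from the power spectral density
    f, pxx = welch(x, fs=fs, nperseg=1024)
    feats += [f[np.argmax(pxx)], (f * pxx).sum() / pxx.sum(), pxx.max()]
    return np.array(feats)

# Placeholder unsegmented PCG recordings and normal/abnormal labels
X_raw = [np.random.randn(10 * 2000) for _ in range(40)]
y = np.random.randint(0, 2, 40)
X = np.vstack([pcg_features(x) for x in X_raw])
print(cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5).mean())
```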

Keywords: pattern recognition, machine learning, computer aided diagnosis, heart sound classification, feature extraction

Procedia PDF Downloads 244
2451 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. Geomorphological classifications had in the past a secondary role, although proposals like the River Styles Framework, Catchment Baseline Survey or Stroud Rural Sustainable Drainage Project did incorporate geomorphology for management decision-making. In recent years, many studies have turned their attention to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve their ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure allows the geomorphological quality of the rivers to be monitored and any alterations to be detected. The maps are useful to researchers and managers, especially for conservation work and river restoration.
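
A minimal sketch of the discriminant step is given below, assuming only the two variables named in the abstract (specific stream power and mean grain size) and invented class labels in place of the Galician control-point data; each class ends up with its own linear discriminant equation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical control points: [specific stream power (W/m2), mean grain size (mm)]
X = np.array([[15, 0.5], [22, 0.8], [120, 45], [150, 60], [300, 180], [350, 210]])
y = np.array(["sand-bed", "sand-bed", "gravel-bed", "gravel-bed",
              "boulder-bed", "boulder-bed"])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# One linear discriminant equation per geomorphological type (coefficients + intercept)
for cls, coef, b in zip(lda.classes_, lda.coef_, lda.intercept_):
    print(f"{cls}: {coef[0]:.3f}*power + {coef[1]:.3f}*grain + {b:.3f}")

# A new site is assigned to the type whose equation scores highest
print(lda.predict([[130, 50]]))
```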

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 357
2450 Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM

Authors: Hadeer R. M. Tawfik, Rania A. K. Birry, Amani A. Saad

Abstract:

Eyes are considered to be the most sensitive and important organ for human beings. Thus, any eye disorder will affect the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used as the feature extraction technique for a dataset of 120 eye images, followed by a classification process that classified the image set into three classes: normal, early, and advanced stage. A comparison between the two classifiers used, the support vector machine (SVM) and the artificial neural network (ANN), was done for the same dataset of 120 eye images. It was concluded that the SVM gave better results than the ANN, with a success rate of 96.8% accuracy for the SVM versus 92.3% accuracy for the ANN.

Keywords: cataract, classification, detection, feature extraction, grading, log-gabor, neural networks, support vector machines, wavelet

Procedia PDF Downloads 313
2449 Electronic Nose Based on Metal Oxide Semiconductor Sensors as an Alternative Technique for the Spoilage Classification of Oat Milk

Authors: A. Deswal, N. S. Deora, H. N. Mishra

Abstract:

The aim of the present study was to develop a rapid electronic-nose method for online quality control of oat milk. Electronic nose analysis and bacteriological measurements were performed to analyse the spoilage kinetics of oat milk samples stored at room temperature and under refrigerated conditions for up to 15 days. Principal component analysis (PCA), discriminant factorial analysis (DFA) and soft independent modelling by class analogy (SIMCA) classification techniques were used to differentiate the samples of oat milk at different days. The total plate count (bacteriological method) was selected as the reference method to consistently train the electronic nose system. The e-nose was able to differentiate between the oat milk samples of varying microbial load. The results obtained from the bacterial total viable counts showed that the shelf-life of oat milk stored at room temperature and under refrigerated conditions was 20 hours and 13 days, respectively. The models built classified oat milk samples based on the total microbial population into “unspoiled” and “spoiled”.

Keywords: electronic-nose, bacteriological, shelf-life, classification

Procedia PDF Downloads 246
2448 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information

Authors: Babar Khan, Wang Zhijie

Abstract:

Machine vision has been playing a significant role in industrial automation, imitating a wide variety of human functions and providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite the intensive research, there have not been any attempts to classify fabric prints based on printed texture and colour; most of the research so far encompasses only black-and-white or greyscale images. We proposed a biologically inspired processing architecture to classify fabrics with respect to their print texture and colour. We created a texture descriptor based on the HMAX model for machine vision, and incorporated a colour descriptor based on opponent colour channels simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on the classification of fabric print texture and colour, but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures.
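
A small sketch of the opponent-colour component of such a descriptor is given below: single-opponent channels are built from red-green and blue-yellow differences, and a double-opponent response is approximated by filtering an opponent channel with a centre-surround (difference-of-Gaussians) kernel. The channel weights and filter scales are assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent_channels(rgb):
    """Single-opponent channels from an RGB image with values in [0, 1]."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    O1 = (R - G) / np.sqrt(2)                 # red-green opponency
    O2 = (R + G - 2 * B) / np.sqrt(6)         # blue-yellow opponency
    O3 = (R + G + B) / np.sqrt(3)             # luminance
    return O1, O2, O3

def double_opponent(channel, sigma_center=1.0, sigma_surround=3.0):
    """Approximate double-opponent response with a difference of Gaussians."""
    return gaussian_filter(channel, sigma_center) - gaussian_filter(channel, sigma_surround)

img = np.random.rand(64, 64, 3)               # placeholder fabric patch
o1, o2, o3 = opponent_channels(img)
resp = double_opponent(o1)
print(resp.shape, resp.mean())
```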

Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel

Procedia PDF Downloads 467
2447 A New Scheme for Chain Code Normalization in Arabic and Farsi Scripts

Authors: Reza Shakoori

Abstract:

This paper presents a structural correction of Arabic and Persian strokes using manipulation of their chain codes in order to improve the rate and performance of Persian and Arabic handwritten word recognition systems. It collects pure and effective features to represent a character with one consolidated feature vector and reduces variations in order to decrease the number of training samples and increase the chance of successful classification. Our results also show how the proposed approaches can simplify classification, and consequently recognition, by reducing variations and possible noise in the chain code while keeping the orientation of characters and their backbone structures.
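
To make the chain-code manipulation concrete, the sketch below computes an 8-direction Freeman chain code from an ordered stroke contour and applies two standard normalizations (a rotation-invariant first difference and start-point normalization); the stroke-specific correction rules for Arabic and Persian characters described in the paper are not reproduced.

```python
import numpy as np

# 8-connected Freeman directions in image coordinates: 0=E, 2=N, 4=W, 6=S
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(points):
    """points: ordered (x, y) pixels along a stroke contour."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        code.append(DIRS[step])
    return code

def normalize(code):
    # First difference makes the code rotation-invariant
    diff = [(code[i] - code[i - 1]) % 8 for i in range(1, len(code))]
    # Start-point normalization: pick the lexicographically smallest rotation
    rotations = [diff[i:] + diff[:i] for i in range(len(diff))]
    return min(rotations)

stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2)]   # toy stroke contour
print(chain_code(stroke), normalize(chain_code(stroke)))
```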

Keywords: Arabic, chain code normalization, OCR systems, image processing

Procedia PDF Downloads 388
2446 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with four-probe soil electrical resistivity method and cone penetration test (CPT), to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using industrial grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of soil (ρ) is measured using a set of four probes arranged in Wenner’s array. It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay” and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
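
The GLCM textural parameters mentioned above can be sketched in a few lines; the version below computes a single-offset co-occurrence matrix and three example parameters (contrast, energy, homogeneity) on a random placeholder patch, leaving out the grey-level quantisation choices, the ANN and the resistivity/CPT fusion used in the actual system.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalised grey-level co-occurrence matrix for one pixel offset."""
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantise grey levels
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    P = P + P.T                                  # make it symmetric
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    return {"contrast": float((P * (i - j) ** 2).sum()),
            "energy": float((P ** 2).sum()),
            "homogeneity": float((P / (1.0 + np.abs(i - j))).sum())}

soil_patch = np.random.rand(64, 64)              # placeholder greyscale soil image patch
print(glcm_features(glcm(soil_patch)))
```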

Keywords: Computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 198
2445 Application of Neuroscience in Aligning Instructional Design to Student Learning Style

Authors: Jayati Bhattacharjee

Abstract:

Teaching is a very dynamic profession. Teaching science is as challenging as learning the subject, if not more so; take, for instance, the teaching of chemistry. From the introductory concepts of subatomic particles to the atoms of elements and their symbols, and further on to presenting chemical equations and so forth, the subject is a challenge on both sides of the teaching-learning equation. This paper combines the neuroscience of learning and memory with knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The model of ‘working memory’, with its visuo-spatial sketchpad, central executive and phonological loop that transform short-term memory into long-term memory, actually supports the psychological theory of learning styles, i.e. Visual-Auditory-Kinesthetic. A closer examination of David Kolb’s learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information through experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality. Others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization: thinking about, analyzing, or systematically planning, rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C+O=M (C: instructional condition; O: instructional outcome; M: instructional method). The desired outcome and conditions are independent variables, whereas the instructional method is dependent and hence can be planned and suited to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage learners, build confidence and hope, and go a long way towards replacing, with a human touch, the anxiety and hopelessness that a student experiences while learning science. Application of this model has been tried in teaching chemistry to high school students as well as in workshops with teachers. The response received has confirmed the desired results.

Keywords: working memory model, learning style, prescriptive design, assessment for learning

Procedia PDF Downloads 334
2444 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role towards sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training time. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of parameter C. It effectively resolves the effects of outliers, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
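
The Hadoop/MapReduce parallelization is beyond a short sketch, but the single-node core of the idea can be illustrated with scikit-learn, where the sigmoid kernel is the hyperbolic tangent kernel and fuzzy-rough memberships can enter as per-sample weights. The membership used below (inverse distance to the class centre) is a simple placeholder, not the kernel-based membership function of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Placeholder fuzzy membership: samples far from their class centre get less weight,
# which damps the influence of noisy or outlying training points.
weights = np.empty(len(y))
for cls in np.unique(y):
    idx = np.where(y == cls)[0]
    d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
    weights[idx] = 1.0 / (1.0 + d / d.max())

# kernel="sigmoid" is the hyperbolic tangent kernel: tanh(gamma * <x, x'> + coef0)
clf = SVC(kernel="sigmoid", gamma="scale", coef0=0.0, C=1.0)
clf.fit(X, y, sample_weight=weights)
print(clf.score(X, y), len(clf.support_))      # training accuracy and number of support vectors
```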

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 479
2443 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
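
A compact way to run the kind of comparison described is sketched below with Keras: the same small ReLU CNN is trained on CIFAR-10 under each optimizer and the final validation accuracies are compared. The network depth, epoch count and batch size are arbitrary illustrative choices, not the configuration used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn():
    # Small ReLU CNN used identically for every optimizer under test
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

results = {}
for opt in ["adam", "rmsprop", "adadelta", "adagrad", "sgd"]:
    model = build_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_train, y_train, epochs=5, batch_size=128,
                     validation_data=(x_test, y_test), verbose=0)
    results[opt] = hist.history["val_accuracy"][-1]
print(results)
```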

Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 105
2442 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work aims at presenting a study that demonstrates the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of ‘not on demand’ written samples (those texts differ in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author were semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: (1) The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes. (2) Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and that presents different levels of abstraction. This ‘grammar’ would be a style marker. (3) Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? (4) The more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, it means each one’s options have discriminatory power. (5) Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed the hypothesis was confirmed and, hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
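
The Jaccard coefficient mentioned at the end is straightforward to make explicit; the sketch below compares the sets of content-plane tags observed in two text samples, with invented tag names standing in for the actual semiotic categories.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of annotation tags."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical content-plane categories tagged in two texts by the same author
author1_text_a = {"euphoria", "conjunction", "thematic:identity", "actorialization"}
author1_text_b = {"euphoria", "disjunction", "thematic:identity", "temporalization"}
print(jaccard(author1_text_a, author1_text_b))   # intra-author similarity, e.g. 0.33
```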

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 234
2441 Reservoir Fluids: Occurrence, Classification, and Modeling

Authors: Ahmed El-Banbi

Abstract:

Several PVT models exist to represent how PVT properties are handled in sub-surface and surface engineering calculations for oil and gas production. The most commonly used models include black oil, modified black oil (MBO), and compositional models. These models are used in calculations that allow engineers to optimize and forecast well and reservoir performance (e.g., reservoir simulation calculations, material balance, nodal analysis, surface facilities, etc.). The choice of model depends on the fluid type and the production process (e.g., depletion, water injection, gas injection, etc.). Based on close to 2,000 reservoir fluid samples collected from different basins and locations, this paper presents some conclusions on the occurrence of reservoir fluids. It also reviews the common methods used to classify reservoir fluid types. Based on new criteria related to the production behavior of different fluids and economic considerations, an updated classification of reservoir fluid types is presented in the paper. Recommendations on the use of different PVT models to simulate the behavior of different reservoir fluid types are discussed. The requirements of each PVT model are highlighted. Available methods for the calculation of PVT properties from each model are also discussed. Practical recommendations and tips on how to control the calculations to achieve the most accurate results are given.

Keywords: PVT models, fluid types, PVT properties, fluids classification

Procedia PDF Downloads 58
2440 Short Text Classification for Saudi Tweets

Authors: Asma A. Alsufyani, Maram A. Alharthi, Maha J. Althobaiti, Manal S. Alharthi, Huda Rizq

Abstract:

Twitter is one of the most popular microblogging sites that allows users to publish short text messages called 'tweets'. Increasing the number of accounts to follow (followings) increases the number of tweets that will be displayed from different topics in an unclassified manner in the timeline of the user. Therefore, it can be a vital solution for many Twitter users to have their tweets in a timeline classified into general categories to save the user’s time and to provide easy and quick access to tweets based on topics. In this paper, we developed a classifier for timeline tweets trained on a dataset consisting of 3600 tweets in total, which were collected from Saudi Twitter and annotated manually. We experimented with the well-known Bag-of-Words approach to text classification, and we used support vector machines (SVM) in the training process. The trained classifier performed well on a test dataset, with an average F1-measure equal to 92.3%. The classifier has been integrated into an application, which practically proved the classifier’s ability to classify timeline tweets of the user.
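
The pipeline named in the abstract (Bag-of-Words features feeding an SVM) corresponds closely to a few lines of scikit-learn; the tweets, category labels and split below are tiny placeholders standing in for the 3,600 manually annotated Saudi tweets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder tweets and topic labels (not the study's annotated corpus)
tweets = ["مباراة اليوم كانت رائعة", "انخفاض أسعار الأسهم", "وصفة كيك سهلة",
          "الدوري السعودي غدا", "ارتفاع سعر الذهب", "مطعم جديد في الرياض",
          "هدف رائع في الدقيقة الأخيرة", "تقرير عن سوق العقار"]
labels = ["sports", "economy", "food", "sports", "economy", "food", "sports", "economy"]

X_train, X_test, y_train, y_test = train_test_split(tweets, labels, test_size=0.25,
                                                    random_state=0)
clf = make_pipeline(CountVectorizer(), LinearSVC())     # Bag-of-Words + SVM
clf.fit(X_train, y_train)
print(f1_score(y_test, clf.predict(X_test), average="macro"))
```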

Keywords: corpus creation, feature extraction, machine learning, short text classification, social media, support vector machine, Twitter

Procedia PDF Downloads 138
2439 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation

Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga

Abstract:

Color and texture are the two most determining elements in the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example, in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspection, food science and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum: the visible parts and even those that are invisible to the human eye. The amounts of light reflected by the earth in spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G) and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigate the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications like coastline detection, where the detection result is strongly dependent on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to show the best-performing color space for land-sea segmentation. In this sense, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification; this allows the land part to be segmented from the sea part. By analyzing the different results of this study, the HSV color space is found to give the best classification performance when using color and texture features, which is perfectly coherent with the results presented in the literature.
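
The feature-extraction idea (convert the tile to a candidate colour space, decompose each channel with the Haar wavelet, keep sub-band energies) can be sketched as follows; a manual single-level Haar transform is used for self-containment, and the HSV conversion stands in for the five colour spaces compared in the study.

```python
import numpy as np
from skimage.color import rgb2hsv

def haar2d(x):
    """Single-level 2D Haar decomposition into LL, LH, HL, HH sub-bands."""
    x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2]
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2                          # average/detail along rows
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def color_texture_features(rgb):
    """Per-channel sub-band energies in HSV (one of the compared colour spaces)."""
    img = rgb2hsv(rgb)
    feats = []
    for ch in range(3):
        for band in haar2d(img[..., ch]):
            feats.append(float((band ** 2).mean()))            # energy of each sub-band
    return np.array(feats)                                     # 3 channels x 4 sub-bands

tile = np.random.rand(128, 128, 3)                             # placeholder image tile
print(color_texture_features(tile))
```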

Keywords: classification, coastline, color, sea-land segmentation

Procedia PDF Downloads 230
2438 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, called Laws’ texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and ~55% had squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features by using first order statistics (FOS), gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and Laws’ texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used in the automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with the SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with the one-vs.-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
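
Laws' texture filters, the approach highlighted in the abstract, are built as outer products of five 1-D vectors; the sketch below constructs the 25 two-dimensional masks and computes simple texture-energy features for an image region. The PET-specific preprocessing, the FOS/GLCM/GLRLM features and the sequential forward selection step are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

# Laws' 1-D vectors: Level, Edge, Spot, Ripple, Wave
V = {"L5": [1, 4, 6, 4, 1], "E5": [-1, -2, 0, 2, 1],
     "S5": [-1, 0, 2, 0, -1], "R5": [1, -4, 6, -4, 1], "W5": [-1, 2, 0, -2, 1]}

def laws_energy_features(img):
    """Texture-energy features from Laws' 2-D masks (outer products of the 1-D vectors)."""
    img = img - img.mean()                      # crude illumination correction
    feats = {}
    for name_a, a in V.items():
        for name_b, b in V.items():
            mask = np.outer(a, b).astype(float)
            response = convolve(img, mask, mode="reflect")
            feats[name_a + name_b] = float(np.abs(response).mean())
    return feats

roi = np.random.rand(40, 40)                    # placeholder tumour ROI from a PET slice
f = laws_energy_features(roi)
print(len(f), f["L5E5"], f["E5E5"])
```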

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 310
2437 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration purposes. This study aims to evaluate the impact of a range of parameters of a CNN architecture, i.e., AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image size under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly related to the dataset. For the batch size evaluation, it was shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 decreases the accuracy rate to 86.5% at the 11th epoch, and to 63% when using only one epoch. On the other hand, the choice of kernel size is only loosely related to the dataset. From a practical point of view, a filter size of 20 produces an accuracy of 70.4286%. The last experiment, on image size, shows that the accuracy improvement depends on image size, although the performance gain was noticed to be computationally expensive. These conclusions open up opportunities towards better classification performance in various applications such as planetary remote sensing.

Keywords: CNNs, hyperparamters, remote sensing, land cover, land use

Procedia PDF Downloads 156
2436 Impact of Leadership Styles on Work Motivation and Organizational Commitment among Faculty Members of Public Sector Universities in Punjab

Authors: Wajeeha Shahid

Abstract:

The study was designed to assess the impact of transformational and transactional leadership styles on work motivation and organizational commitment among faculty members of universities of Punjab. 713 faculty members were selected as a sample through a convenient random sampling technique. Three self-constructed questionnaires, namely the Leadership Styles Questionnaire (LSQ), the Work Motivation Questionnaire (WMQ) and the Organizational Commitment Questionnaire (OCMQ), were used as research instruments. The major objectives of the study included assessing the effect and impact of transformational and transactional leadership styles on work motivation and organizational commitment. The theoretical framework of the study included Idealized Influence, Inspirational Motivation, Intellectual Stimulation, Individualized Consideration, Contingent Rewards and Management by Exception as independent variables and extrinsic motivation, intrinsic motivation, affective commitment, continuance commitment and normative commitment as dependent variables. SPSS Version 21 was used to analyze and tabulate the data. Cronbach's alpha reliability, Pearson correlation and multiple regression analysis were applied as statistical treatments for the analysis. Results revealed that Idealized Influence correlated significantly with intrinsic motivation and affective commitment, whereas Contingent Rewards had a strong positive correlation with extrinsic motivation and affective commitment. Multiple regression models revealed a variance of 85% for transformational leadership style over work motivation and organizational commitment, whereas the transactional style as a predictor manifested a variance of 79% for work motivation and 76% for organizational commitment. It was suggested that changing organizational cultures are demanding more from their leadership. All organizations need to consider the transformational leadership style as an important part of their equipment in leveraging both soft and hard organizational targets.

Keywords: leadership styles, work motivation, organizational commitment, faculty member

Procedia PDF Downloads 293
2435 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis

Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache

Abstract:

This study introduces the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting (IDW) model. ML-IDW leverages ML-NNs' processing capabilities, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW, comparing its performance against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. The results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting a lower mean square error in regression and a higher F1 score in classification.
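
For reference, the classical single-layer IDW interpolator that ML-IDW generalizes can be written in a few lines: a query point is predicted as a distance-weighted average of the observed values. The multi-layer, trainable model described in the abstract is not reproduced here, and the data below are a synthetic placeholder.

```python
import numpy as np

def idw_predict(query, coords, values, power=2.0, eps=1e-12):
    """Classical inverse distance weighting at one or more query points.

    query:  (m, 2) points to predict, coords: (n, 2) observed locations,
    values: (n,) observed values, power: distance exponent.
    """
    d = np.linalg.norm(query[:, None, :] - coords[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                 # closer observations weigh more
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

# Placeholder spatial dataset: noisy samples of a smooth surface
rng = np.random.default_rng(0)
coords = rng.random((200, 2))
values = np.sin(3 * coords[:, 0]) + np.cos(3 * coords[:, 1]) + 0.05 * rng.standard_normal(200)
grid = np.array([[0.25, 0.25], [0.5, 0.5], [0.75, 0.75]])
print(idw_predict(grid, coords, values))
```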

Keywords: deep learning, multi-layer neural networks, gradient descent, spatial interpolation, inverse distance weighting

Procedia PDF Downloads 29
2434 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by the presence of a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and the reaction time can be severely affected. Compared to UAVs, flying birds show similar velocity, radar cross-section and, in general, similar characteristics. Given the absence of a single feature that is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field and related to different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.) in terms of both the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 205
2433 Emotional and Personal Characteristics of Children in Relation to the Parental Attitudes

Authors: Svetlana S. Saveysheva, Victoria E. Vasilenko

Abstract:

The purpose of the research was to study the emotional and personal characteristics of preschool children in relation to the characteristics of child-parent interaction and deviant parental attitudes. The study involved 172 mothers and 172 children (85 boys and 87 girls) aged 4.5 to 7 years (mean age 6 years) living in St. Petersburg, Russia. The methods used were a demographic questionnaire, the projective drawing method 'House-Tree-Man', the anxiety test (Temml, Dorki, Amen), the self-esteem technique 'Ladder', expert evaluation of sociability and aggressiveness, the questionnaire on child-parent emotional interaction (E.I. Zaharova) and the questionnaire 'Analysis of Family Relationships' (E.G. Eidemiller, V.V. Yustitsky). Results. The greatest number of links with personal characteristics was found for such deviant parental attitudes as overprotection and characteristics of the authoritarian style (prohibitions, sanctions). If the mother shows such features of the parental relationship, the child is characterized by lower self-esteem, increased anxiety, distrust of themselves and hostility. Children have more pronounced manifestations of aggression under a conniving and unstable style of parenting. The sensitivity of the mother is positively associated with children’s self-esteem. Unconditional acceptance of the child, the predominance of a positive emotional background, and orientation to the state of the child during interaction promote the development of communication skills and reduce aggressiveness. However, excessive closeness between mother and child can make it difficult to develop communication skills. Conclusions. The greatest influence on emotional and personal characteristics is exerted by such features of the parental relationship as overprotection, characteristics of the authoritarian style, underdevelopment of the sphere of parental feelings, the sensitivity of the mother, and behavioral manifestations of emotional interaction. The research is supported by RFBR №18-013-00990.

Keywords: characteristics of personality, child-parent interaction, children, deviant parental attitudes

Procedia PDF Downloads 223
2432 Poetic Music by the Poet, Commander of the Faithful, Muhammad Bello: Prosodical Study

Authors: Sirajo Muhammad Sokoto

Abstract:

The Commander of the Faithful, Muhammad Bello, is considered one of the most distinguished scholars and poetic geniuses, and is famous for reciting poetry in the classical vertical style. He also follows the model of pre-Islamic poets such as Imru’ al-Qays and Alqamah and, among the poets of the Islamic era, Hassan bin Thabit, Amr bin Abi Rabi’ah, and others. The poet drew from the seas of the Arabic language and its styles at the hands of his father, Sheikh Othman Bin Fodio, and his uncle, Sheikh Abdullah Bin Fodio, both of which made Muhammad Bello so conversant with the Arabic language that he was able to write poetry in a beautiful form and a good style. The Commander of the Faithful, Muhammad Bello, did not deviate from what the Arabs know of poetic elements, such as taking into account their meanings and music; Muhammad Bello used every bahr (meter) of prosody and its techniques in many of his poems. This article presents the efforts made by the poet Muhammad Bello in composing poems in the poetic meters, taking into account musical tones for different purposes according to his intent. The article will also discuss the poet’s talent, skill, and eloquence.

Keywords: music, Muhammad Bello, poetry, performances

Procedia PDF Downloads 56
2431 Content Based Instruction: An Interdisciplinary Approach in Promoting English Language Competence

Authors: Sanjeeb Kumar Mohanty

Abstract:

Content Based Instruction (CBI) in English Language Teaching (ELT) primarily helps learners of English as a Second Language (ESL). At the same time, it fosters a multidisciplinary style of learning by promoting a collaborative learning style. It is an approach to teaching ESL that attempts to combine language with interdisciplinary learning in order to improve language proficiency and facilitate content learning. Hence, the basic purpose of CBI is that language should be taught in conjunction with academic subject matter. It helps in establishing the content as well as developing language competency. This study aims at supporting the potential value of an interdisciplinary approach in promoting English Language Learning (ELL) by teaching writing skills to a small group of learners and discussing the findings with teachers from various disciplines in a workshop. Once oriented, the teachers use the same approach collaboratively in their classes. The inputs from the learners as well as the teachers are expected to raise positive awareness of the vast benefits that Content Based Instruction can offer in advancing the language competence of the learners.

Keywords: content based instruction, interdisciplinary approach, writing skills, collaborative approach

Procedia PDF Downloads 259
2430 Deep Graph Embeddings for the Analysis of Short Heartbeat Interval Time Series

Authors: Tamas Madl

Abstract:

Sudden cardiac death (SCD) constitutes a large proportion of cardiovascular mortalities, provides little advance warning, and the risk is difficult to recognize based on ubiquitous, low-cost medical equipment such as the standard, 12-lead, ten-second ECG. Autonomic abnormalities have been shown to be strongly predictive of SCD risk; yet current methods are not trivially applicable to the brevity and low temporal and electrical resolution of standard ECGs. Here, we build horizontal visibility graph representations of very short inter-beat interval time series, and perform unsupervised representation learning in order to convert these variable-size objects into fixed-length vectors preserving similarity relations. We show that such representations facilitate classification into healthy vs. at-risk patients on two different datasets, the Multiparameter Intelligent Monitoring in Intensive Care II and the PhysioNet Sudden Cardiac Death Holter Database. Our results suggest that graph representation learning of heartbeat interval time series facilitates robust classification even in sequences as short as ten seconds.
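
The horizontal visibility graph construction that these representations build on can be written compactly: two beats i < j are linked whenever every intermediate interval lies strictly below both endpoints. The downstream unsupervised embedding and classification are not reproduced, and the RR-interval series below is a random placeholder rather than Holter data.

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Edges (i, j) of the horizontal visibility graph of a time series x."""
    edges = []
    n = len(x)
    for i in range(n - 1):
        edges.append((i, i + 1))                     # consecutive samples always see each other
        top = x[i + 1]
        for j in range(i + 2, n):
            if top < x[i] and top < x[j]:            # all values strictly between i and j are lower
                edges.append((i, j))
            top = max(top, x[j])
            if top >= x[i]:                          # view from i is now blocked
                break
    return edges

rr = np.random.default_rng(0).normal(0.8, 0.05, 12)  # placeholder ten-second RR intervals (s)
g = horizontal_visibility_graph(rr)
degrees = np.bincount(np.array(g).ravel(), minlength=len(rr))
print(len(g), degrees)                               # edge count and per-node degree
```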

Keywords: sudden cardiac death, heart rate variability, ECG analysis, time series classification

Procedia PDF Downloads 221
2429 Lexical Classification of Compounds in Berom: A Semantic Description of N-V Nominal Compounds

Authors: Pam Bitrus Marcus

Abstract:

Compounds in Berom, a Niger-Congo language that is spoken in parts of central Nigeria, have been understudied, and the semantics of N-V nominal compounds have not been sufficiently delineated. This study describes the lexical classification of compounds in Berom and, specifically, examines the semantics of nominal compounds with N-V constituents. The study relied on a data set of 200 compounds that were drawn from Bere Naha (a newsletter publication in Berom). Contrary to the nominalization process in defining the lexical class of compounds in languages, the study revealed that verbal and adjectival classes of compounds are also attested in Berom and N-V nominal compounds have an agentive or locative interpretation that is not solely determined by the meaning of the constituents of the compound but by the context of the usage.

Keywords: berom, berom compounds, nominal compound, N-V compounds

Procedia PDF Downloads 61
2428 Frequency of the English Phrasal Verbs Used by Iranian Learners as a Reference to the Style of Writing Adopted by the Learners

Authors: Hamzeh Mazaherylaghab, Mehrangiz Vahabian, Seyyedeh Zahra Asghari

Abstract:

The present study initially focused on the frequency of phrasal verbs used by Iranian learners of English. The results were then compared to the findings from native-speaker corpora. After the extraction of phrasal verbs from the learner and native-speaker corpora, the findings were analysed. The results showed that Iranian learners avoided using phrasal verbs in many cases. Some of the findings proved to be significant. It was also found that the learners used the single-word counterparts of the avoided phrasal verbs to compensate for their lack of knowledge in many cases. Semantic complexity and the lack of an L1 counterpart may have been the main reasons for avoidance, but despite the avoidance phenomenon, the learners displayed a tendency to use many other phrasal verbs, which may have been due to the increase in the number of multi-word verbs in Persian. The overall scores confirmed that the language produced by the learners shows signs of a more formal style in comparison with native speakers of English, using fewer phrasal verbs and more formal single-word verbs instead.

Keywords: corpus, corpora, LOCNESS, phrasal verbs, single-word verb

Procedia PDF Downloads 185
2427 Application of Fuzzy Clustering on Classification Agile Supply Chain Firms

Authors: Hamidreza Fallah Lajimi, Elham Karami, Alireza Arab, Fatemeh Alinasab

Abstract:

Being responsive is an increasingly important skill for firms in today’s global economy; thus firms must be agile. Naturally, it follows that an organization’s agility depends on its supply chain being agile. However, achieving supply chain agility is a function of other abilities within the organization. This paper analyses results from a survey of 71 Iranian manufacturing companies in order to identify some of the factors for agile organizations in managing their supply chains. We then classify these companies into four clusters using the fuzzy c-means technique, with four validation functions employed to determine the optimal number of clusters automatically.

Keywords: agile supply chain, clustering, fuzzy clustering, business engineering

Procedia PDF Downloads 690