Search results for: overview of porosity classification
984 Genetic Folding: Analyzing the Mercer's Kernels Effect in Support Vector Machine Using Genetic Folding
Authors: Mohd A. Mezher, Maysam F. Abbod
Abstract:
Genetic Folding (GF), a new class of evolutionary algorithm (EA), is introduced for the first time. It is based on chromosomes composed of floating genes structurally organized in a parent form and separated by dots. The genotype/phenotype system of GF generates a kernel expression, which serves as the objective function of a superior classifier. In this work, the question of satisfying the mapping rules in evolving populations is addressed by analyzing populations undergoing either Mercer's or non-Mercer's rules. The results presented here show that populations undergoing Mercer's rules practically improve model selection for the Support Vector Machine (SVM). The experiment is trained on a multi-classification problem and tested on the nonlinear Ionosphere dataset. The aim of this paper is to determine whether evolving kernels that satisfy Mercer's rules through Genetic Folding improves SVM performance on complicated domains and problems.
Keywords: Genetic Folding, GF, Evolutionary Algorithms, Support Vector Machine, Genetic Algorithm, Genetic Programming, Multi-Classification, Mercer's Rules
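The abstract above hinges on whether an evolved kernel satisfies Mercer's condition. A minimal, hedged way to test a candidate kernel numerically is to build its Gram matrix on a finite sample and check that it is symmetric positive semidefinite; the kernel functions, data and tolerance below are illustrative assumptions, not the paper's GF-generated expressions.

```python
import numpy as np

def is_mercer_kernel(kernel, X, tol=1e-8):
    """Numerically check Mercer's condition for a candidate kernel:
    the Gram matrix on a finite sample must be symmetric and
    positive semidefinite (all eigenvalues >= -tol)."""
    n = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    symmetric = np.allclose(K, K.T, atol=tol)
    eigvals = np.linalg.eigvalsh((K + K.T) / 2.0)   # eigvalsh expects a symmetric matrix
    return symmetric and eigvals.min() >= -tol

# Illustrative candidate kernels (not the GF-evolved expressions from the paper).
rbf = lambda x, z: np.exp(-0.5 * np.sum((x - z) ** 2))       # valid Mercer kernel
sigmoid = lambda x, z: np.tanh(2.0 * np.dot(x, z) - 1.0)     # often fails the PSD check

X = np.random.RandomState(0).randn(40, 5)
print(is_mercer_kernel(rbf, X), is_mercer_kernel(sigmoid, X))
```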
983 Varieties of Capitalism and Small Business CSR: A Comparative Overview
Authors: S. Looser, W. Wehrmeyer
Abstract:
Given the limited research on Small and Medium-sized Enterprises' (SMEs) contribution to Corporate Social Responsibility (CSR) and even scarcer research on Swiss SMEs, this paper helps to fill these gaps by enabling the identification of supranational SME parameters. Thus, the paper investigates the current state of SME practices in Switzerland and across 15 other countries. Combining the degree to which SMEs demonstrate an explicit (or business case) approach or see CSR as an implicit moral activity with the assessment of their attributes for “variety of capitalism” defines the framework of this comparative analysis. To outline Swiss small business CSR patterns in particular, 40 SME owner-managers were interviewed. A secondary data analysis of studies from different countries laid the groundwork for this comparative overview of small business CSR. The paper identifies Swiss small business CSR as driven by norms, values, and the aspiration to contribute to society, thus as an implicit part of day-to-day business. Similar to most Central European, Mediterranean, Nordic, and Asian countries, explicit CSR is still very rare in Swiss SMEs. Astonishingly, British and American SMEs also follow this pattern in spite of their strong and distinctly liberal market economies. Though other findings show that nationality matters, this research concludes that SME culture and an informal CSR agenda are strongly formative, superseding even the forces of market economies, national cultural patterns, and language. Hence, classifications of countries by their market system, as found in the comparative capitalism literature, do not match the CSR practices in SMEs as they do not mirror the peculiarities of their business. This raises questions about the universality and generalisability of unmediated, explicit management concepts, especially in the context of small firms.
Keywords: CSR, comparative study, cultures of capitalism, Small and Medium-sized Enterprises.
982 Automatic Musical Genre Classification Using Divergence and Average Information Measures
Authors: Hassan Ezzaidi, Jean Rouat
Abstract:
Recently, much research has been conducted to retrieve pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based upon information theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as a baseline and reference system. Various strategies are proposed for the training and testing sessions: matched or mismatched conditions, long training and long testing, and long training and short testing. For all experiments, the file sections used for testing were never used during training. With matched conditions, all examined measures yield the best and similar scores (almost 100%). With mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially for the short testing case. It is also observed that the average discrimination information measure is most appropriate for music category classification, whereas the divergence measure is more suitable for music subcategory classification.
Keywords: Audio feature, information measures, music genre.
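A minimal sketch of the general idea, not the paper's exact measures: model each genre's features as a Gaussian, score a test segment by a symmetric (J) divergence between the segment's feature statistics and each genre model, and compare against a GMM baseline. The features, divergence form and all parameters below are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kl_diag_gauss(mu_p, var_p, mu_q, var_q):
    """KL divergence between two diagonal-covariance Gaussians."""
    return 0.5 * np.sum(var_p / var_q + (mu_q - mu_p) ** 2 / var_q
                        - 1.0 + np.log(var_q / var_p))

def j_divergence(mu_p, var_p, mu_q, var_q):
    """Symmetric (J) divergence used here as the decision measure."""
    return kl_diag_gauss(mu_p, var_p, mu_q, var_q) + kl_diag_gauss(mu_q, var_q, mu_p, var_p)

# Toy data standing in for per-frame audio features (e.g. MFCC-like vectors).
rng = np.random.RandomState(0)
genres = {"rock": rng.randn(500, 8) + 1.0, "jazz": rng.randn(500, 8) - 1.0}

# Per-genre Gaussian "models" (mean and variance of training frames).
models = {g: (X.mean(axis=0), X.var(axis=0) + 1e-6) for g, X in genres.items()}

# Classify a test segment: pick the genre whose model is closest in J-divergence.
test = rng.randn(200, 8) + 1.0                      # frames from an unknown segment
mu_t, var_t = test.mean(axis=0), test.var(axis=0) + 1e-6
print("divergence decision:",
      min(models, key=lambda g: j_divergence(mu_t, var_t, *models[g])))

# GMM baseline: one mixture per genre, decide by total log-likelihood.
gmms = {g: GaussianMixture(n_components=4, random_state=0).fit(X) for g, X in genres.items()}
print("GMM baseline decision:", max(gmms, key=lambda g: gmms[g].score(test)))
```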
981 Statistical Texture Analysis
Authors: G. N. Srinivasan, G. Shobha
Abstract:
This paper presents an overview of the methodologies and algorithms for statistical texture analysis of 2D images. Methods for digital-image texture analysis are reviewed based on available literature and research work either carried out or supervised by the authors.
Keywords: Image texture, texture analysis, statistical approaches, structural approaches, spectral approaches, morphological approaches, fractals, Fourier transforms, Gabor filters, wavelet transforms.
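One of the statistical approaches such an overview typically covers is second-order, co-occurrence-based texture description. A brief hedged sketch with scikit-image follows; the sample image, distances and angles are illustrative choices, not taken from the paper.

```python
import numpy as np
from skimage import data
from skimage.feature import graycomatrix, graycoprops

# Second-order (co-occurrence) statistics on a sample 8-bit grayscale image.
img = data.camera()
glcm = graycomatrix(img,
                    distances=[1, 2],                            # pixel-pair offsets (assumed)
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# A typical Haralick-style feature vector: one value per (distance, angle) pair.
features = np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity", "energy", "correlation")])
print(features.shape)   # 2 distances * 4 angles * 4 properties = 32 features
```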
980 Characterization, Classification and Agricultural Potentials of Soils on a Toposequence in Southern Guinea Savanna of Nigeria
Authors: B. A. Lawal, A. G. Ojanuga, P. A. Tsado, A. Mohammed
Abstract:
This work assessed some properties of three pedons on a toposequence in Ijah-Gbagyi district in Niger State, Nigeria. The pedons were designated as JG1, JG2 and JG3, representing the upper, middle and lower slopes respectively. The surface soil was characterized by a dark yellowish brown (10YR3/4) color at JG1 and JG2 and a very dark grayish brown (10YR3/2) color at JG3. Sand dominated the mineral fraction and its content in the surface horizon decreased down the slope, whereas silt content increased down the slope due to sorting by geological and pedogenic processes. Although organic carbon (OC), total nitrogen (TN) and available phosphorus (P) were rated high, TN and available P decreased down the slope. High cation exchange capacity (CEC) indicated that the soils have a high potential for plant nutrient retention. The pedons were classified as Typic Haplustepts/Haplic Cambisols (Eutric), Plinthic Petraquepts/Petric Plinthosols (Abruptic) and Typic Endoaquepts/Endogleyic Cambisols (Endoclayic).
Keywords: Ecological region, landscape positions, soil characterization, soil classification.
979 Stabilization of Clay Soil Using A-3 Soil
Authors: Mohammed Mustapha Alhaji, Salawu Sadiku
Abstract:
A clay soil classified as A-7-6 and CH according to the AASHTO and Unified soil classification systems respectively was stabilized using an A-3 soil (AASHTO soil classification system). The clay soil was replaced with 0%, 10%, 20%, up to 100% A-3 soil, compacted at both British Standard Light (BSL) and British Standard Heavy (BSH) compaction energy levels, and evaluated using Unconfined Compressive Strength (UCS) as the criterion. The Maximum Dry Density (MDD) of the treated soils at both the BSL and BSH compaction energy levels increased from 0% to 40% A-3 soil replacement, after which the values decreased up to 100% replacement. The trend of the Optimum Moisture Content (OMC) with varied A-3 soil replacement was similar to that of MDD but in reverse order: the OMC decreased from 0% to 40% A-3 soil replacement, after which the values increased up to 100% replacement. This trend was attributed to the observed reduction in void ratio from 0% to 40% replacement, after which the void ratio increased up to 100% replacement. The maximum UCS of the soil at varied A-3 soil replacement increased from 272 and 770 kN/m² at 0% replacement to 295 and 795 kN/m² at 10% replacement for the BSL and BSH compaction energy levels respectively, after which the values decreased to 22 and 60 kN/m² respectively at 70% replacement. Beyond 70% replacement, the mixtures could not be moulded for the UCS test.
Keywords: A-3 soil, clay soil, pozzolanic action, stabilization.
978 Use of RFID Technology for Identification, Traceability Monitoring and the Checking of Product Authenticity
Authors: Adriana Alexandru, Eleonora Tudora, Ovidiu Bica
Abstract:
This paper is an overview of the structure of Radio Frequency Identification (RFID) systems and radio frequency bands used by RFID technology. It also presents a solution based on the application of RFID for brand authentication, traceability and tracking, by implementing a production management system and extending its use to traders.
Keywords: Radio Frequency Identification, Tag, Tag reader, Traceability.
977 Velocity Distribution in Open Channels with Sand: An Experimental Study
Authors: E. Keramaris
Abstract:
In this study, laboratory experiments on open channel flows over a sand bed were conducted. A porous bed (sand bed) with a porosity of ε = 0.70 and a porous thickness of s΄ = 3 cm was tested. Vertical distributions of velocity were evaluated by using two-dimensional (2D) Particle Image Velocimetry (PIV). Velocity profiles were measured above the impermeable bed and above the sand bed for the same series of total water heights (h = 6, 8, 10 and 12 cm) and the same slope S = 1.5. Measurements of mean velocity indicate the effects of the bed material used (sand bed) on the flow characteristics (velocity distribution and Reynolds number) in comparison with those above the impermeable bed.
Keywords: Particle image velocimetry, sand bed, velocity distribution, Reynolds number.
976 Reducing SAGE Data Using Genetic Algorithms
Authors: Cheng-Hong Yang, Tsung-Mu Shih, Li-Yeh Chuang
Abstract:
Serial Analysis of Gene Expression (SAGE) is a powerful quantification technique for generating cell or tissue gene expression data. The gene expression profile of a cell or tissue in several different states is difficult for biologists to analyze because of the large number of genes typically involved. However, feature selection in machine learning can successfully reduce this problem. The method reduces the number of features (genes) in specific SAGE data and retains only the relevant genes. In this study, we used a genetic algorithm to implement feature selection and evaluated the classification accuracy of the selected features with the K-nearest neighbor method. In order to validate the proposed method, we used two SAGE data sets for testing. The results of this study demonstrate that the number of features of the original SAGE data set can be significantly reduced and higher classification accuracy can be achieved.
Keywords: Serial Analysis of Gene Expression, feature selection, genetic algorithm, K-nearest neighbor method.
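A hedged sketch of the wrapper idea described above: a genetic algorithm searches over binary feature masks and scores each mask by the cross-validated accuracy of a K-nearest neighbor classifier. The synthetic data, population size, and genetic operators are illustrative assumptions, not the authors' exact setup or SAGE data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=200, n_features=60, n_informative=8, random_state=0)

def fitness(mask):
    """kNN cross-validated accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

# Simple generational GA over binary feature masks (illustrative parameters).
pop = rng.rand(30, X.shape[1]) < 0.5
for gen in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]               # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.randint(10)], parents[rng.randint(10)]
        cut = rng.randint(1, X.shape[1])                       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.rand(X.shape[1]) < 0.02                   # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected genes:", int(best.sum()), "accuracy:", round(fitness(best), 3))
```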
975 Classification of Health Risk Factors to Predict the Risk of Falling in Older Adults
Authors: L. Lindsay, S. A. Coleman, D. Kerr, B. J. Taylor, A. Moorhead
Abstract:
Cognitive decline and frailty are apparent in older adults, leading to an increased risk of falling. Currently, health care professionals have to make professional judgements regarding such risks, and hence make difficult decisions regarding the future welfare of the ageing population. This study uses health data from The Irish Longitudinal Study on Ageing (TILDA), focusing on adults over the age of 50 years, in order to analyse health risk factors and predict the likelihood of falls. This prediction is based on the use of machine learning algorithms whereby health risk factors are used as inputs to predict the likelihood of falling. Initial results show that health risk factors such as long-term health issues contribute to the number of falls. The identification of such health risk factors has the potential to inform health and social care professionals, older people and their family members in order to mitigate daily living risks.
Keywords: Classification, falls, health risk factors, machine learning, older adults.
974 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain a better result in detecting false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection. The aim of this technique is to choose a feature subset from the original set to improve the classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches, which works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed a better classification of false information for our work. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.
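A minimal sketch of the four steps described in the abstract, under stated assumptions: feature similarity is captured by clustering the feature columns with K-means, one representative feature per cluster is kept, and an SVM classifies on the reduced set. The toy data and all parameter values are assumptions, not the authors' benchmark datasets.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for a high-dimensional fake-news feature matrix (e.g. TF-IDF terms).
X, y = make_classification(n_samples=600, n_features=300, n_informative=20, random_state=0)
X = StandardScaler().fit_transform(X)

# Steps 1-2: measure feature similarity and group similar features.
# Clustering the *columns* puts correlated/redundant features in the same cluster.
n_clusters = 30
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X.T)

# Step 3: keep one representative per cluster (the feature closest to its centroid).
selected = []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
    selected.append(members[np.argmin(dists)])
selected = np.array(selected)

# Step 4: classify on the reduced feature set with an SVM.
Xtr, Xte, ytr, yte = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)
print("features kept:", len(selected), "accuracy:", round(clf.score(Xte, yte), 3))
```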
973 Assessing Land Cover Change Trajectories in Olomouc, Czech Republic
Authors: Mukesh Singh Boori, Vít Voženílek
Abstract:
Olomouc is a unique and complex landscape with widespread forestation and land use. This research was conducted to assess important and complex land use change trajectories in the Olomouc region. Multi-temporal satellite data from 1991, 2001 and 2013 were used to extract land use/cover types by an object-oriented classification method. To achieve the objectives, three different aspects were considered: (1) calculate the quantity of each transition; (2) allocate location-based landscape patterns; (3) compare the land use/cover evaluation procedure. Land cover change trajectories show that 16.69% agriculture, 54.33% forest and 21.98% other areas (settlement, pasture and water-body) were stable over all three decades. Approximately 30% of the study area maintained the same land cover type from 1991 to 2013. A broad scale of political and socioeconomic factors also affected the rate and direction of landscape changes. Distance from settlements was the most important predictor of land cover change trajectories. This showed that most landscape trajectories were caused by socio-economic activities and mainly led to beneficial change in the ecological environment.
Keywords: Remote Sensing, land use/cover, Change trajectories, Image classification.
972 Motor Imaginary Signal Classification Using Adaptive Recursive Bandpass Filter and Adaptive Autoregressive Models for Brain Machine Interface Designs
Authors: Vickneswaran Jeyabalan, Andrews Samraj, Loo Chu Kiong
Abstract:
The noteworthy point in the advancement of Brain Machine Interface (BMI) research is the ability to accurately extract features of the brain signals and to classify them into targeted control actions with the simplest procedures, since the expected beneficiaries are disabled people. In this paper, a new feature extraction method using the combination of adaptive bandpass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor imagery signals extracted from the brain. The introduction of the adaptive bandpass filter improves the characterization process of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. The experimental results on the Graz BCI data set have shown that, by implementing the proposed feature extraction method, LDA and SVM classifiers outperform other AAR approaches of the BCI 2003 competition in terms of the mutual information, the competition criterion, or the misclassification rate.
Keywords: Adaptive autoregressive, adaptive bandpass filter, brain machine Interface, EEG, motor imaginary.
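A hedged sketch of this kind of pipeline on synthetic trials: a fixed Butterworth bandpass and a least-squares AR fit stand in for the paper's adaptive recursive filter and AAR model, and LDA does the left/right classification. The sampling rate, band, model order and simulated signals are assumptions, not the Graz BCI data.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 128                                            # assumed sampling rate (Hz)
b, a = butter(4, [8, 30], btype="band", fs=fs)      # fixed mu/beta band stand-in
                                                    # for the adaptive filter

def ar_features(trial, order=6):
    """Least-squares AR(order) coefficients of a single-channel trial,
    a simplified stand-in for the adaptive AR (AAR) model of the paper."""
    x = filtfilt(b, a, trial)
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic left/right "EEG" trials with different dominant rhythms (assumptions).
rng = np.random.RandomState(0)
def make_trial(freq):
    t = np.arange(3 * fs) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.randn(len(t))

trials = [make_trial(10) for _ in range(40)] + [make_trial(22) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)
F = np.array([ar_features(tr) for tr in trials])

clf = LinearDiscriminantAnalysis().fit(F[::2], labels[::2])           # even trials: train
print("test accuracy:", round(clf.score(F[1::2], labels[1::2]), 3))   # odd trials: test
```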
971 Energy Retrofitting Application Research to Achieve Energy Efficiency in Hot-Arid Climates in Residential Buildings: A Case Study of Saudi Arabia
Authors: A. Felimban, A. Prieto, U. Knaack, T. Klein
Abstract:
This study presents an overview of recent research in building energy-retrofitting strategy applications and analyzes them within the context of hot-arid climate regions, represented in this case study by the Kingdom of Saudi Arabia. The main goal of this research is to carry out an analytical study of recent research approaches to show where the primary gap in knowledge exists and to outline which possible strategies are available for future research. The paper focuses on energy retrofitting strategies at the building envelope level, and the study is limited to specific measures within the hot-arid climate region. Scientific articles were carefully chosen according to search expression criteria, such as retrofitting, energy-retrofitting, hot-arid, energy efficiency, and residential buildings, which helped narrow the research scope. The papers were then explored through descriptive analysis, and the results were set within the Saudi context in order to draw an overview of future opportunities in the field over the last two decades. The analysis of the recent research confirmed that the field suffers from a shortage of research investigating actual applications and testing of newly introduced energy efficiency measures, a lack of energy cost feasibility studies, and a lack of public awareness. In terms of research methods, it was found that simulation software was the major instrument used in energy retrofitting application research. The main knowledge gaps identified included the need for research on actual application testing, on the feasibility of energy retrofitting strategies, on which strategies should be applied first, and on user acceptance of the developed scenarios.
Keywords: Energy efficiency, energy retrofitting, hot arid climate, Saudi Arabia.
970 Site Selection of Traffic Camera Based on Dempster-Shafer and Bagging Theory
Authors: S. Rokhsari, M. Delavar, A. Sadeghi-Niaraki, A. Abed-Elmdoust, B. Moshiri
Abstract:
Traffic incidents have a bad effect on all parts of society, so controlling road networks with enough traffic devices could help to decrease the number of accidents, and using the best method for optimum site selection of these devices could help to implement a good monitoring system. This paper considers important criteria for optimum site selection of traffic cameras based on aggregation methods such as Bagging and Dempster-Shafer concepts. In the first step, important criteria, such as annual traffic flow and distance from critical places such as parks that need more traffic control, were identified for the selection of important road links for traffic camera installation. Then, classification methods such as artificial neural network and decision tree algorithms were employed for the classification of road links based on their importance for camera installation. Finally, to improve the results of the classifiers, aggregation methods based on Bagging and Dempster-Shafer theories were used.
Keywords: Aggregation, Bagging theory, Dempster-Shafer theory, Site selection.
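A minimal sketch of the bagging step only (the Dempster-Shafer combination is omitted): a bagged ensemble of decision trees is compared with a single tree on illustrative road-link features. The feature meanings and data are assumptions, not the study's road network.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy road-link data: features standing in for annual traffic flow, distance to
# parks/schools, accident history; label = "important for camera installation".
X, y = make_classification(n_samples=400, n_features=6, n_informative=4, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

print("single tree :", round(cross_val_score(single_tree, X, y, cv=5).mean(), 3))
print("bagged trees:", round(cross_val_score(bagged, X, y, cv=5).mean(), 3))
```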
969 A Comparison of SVM-based Criteria in Evolutionary Method for Gene Selection and Classification of Microarray Data
Authors: Rameswar Debnath, Haruhisa Takahashi
Abstract:
An evolutionary method whose selection and recombination operations are based on generalization error-bounds of the support vector machine (SVM) can select a subset of potentially informative genes for an SVM classifier very efficiently [7]. In this paper, we use the derivative of the error-bound (first-order criterion) to select and recombine gene features in the evolutionary process, and compare the performance of the derivative of the error-bound with the error-bound itself (zero-order criterion) in the evolutionary process. We also investigate several error-bounds and their derivatives to compare their performance and find the best criteria for gene selection and classification. We use 7 cancer-related human gene expression datasets to evaluate the performance of the zero-order and first-order criteria of error-bounds. Though both criteria follow the same strategy theoretically, the experimental results identify the best criterion for microarray gene expression data.
Keywords: Support vector machine, generalization error-bound, feature selection, evolutionary algorithm, microarray data.
968 Some Physical Properties of Musk Lime (Citrus Microcarpa)
Authors: M.H.R.O. Abdullah, P.E. Ch'ng, N.A. Yunus
Abstract:
Some physical properties of musk lime (Citrus microcarpa) were determined in this study. The average moisture content (wet basis) of the fruit was found to be 85.10 (±0.72)%. The mean length, width and thickness of the fruit were 26.36 (±0.97), 26.40 (±1.04) and 25.26 (±0.94) mm respectively. The average values of geometric mean diameter, sphericity, aspect ratio, mass, surface area, volume, true density, bulk density and porosity were 26.00 (±0.82) mm, 98.67 (±2.04)%, 100.23 (±3.28)%, 10.007 (±0.878) g, 2125.07 (±133.93) mm², 8800.00 (±731.82) mm³, 1002.87 (±39.16) kg m⁻³, 501.70 (±22.58) kg m⁻³ and 49.89 (±3.15)% respectively. The coefficient of static friction on four types of structural surface was found to vary from 0.238 (±0.025) for a glass surface to 0.247 (±0.024) for a steel surface.
Keywords: Musk lime, Citrus microcarpa, physical properties.
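The derived quantities above follow from the measured dimensions and densities. A short sketch reproducing them from the reported mean values, assuming the standard definitions from the physical-properties literature (geometric mean diameter as the cube root of L·W·T, sphericity as Dg/L, porosity from bulk and true density); the authors' exact formulas are assumed to be these.

```python
# Reproduce the derived properties of musk lime from the reported mean values,
# using the standard definitions (assumed to match the authors' formulas).
L, W, T = 26.36, 26.40, 25.26          # length, width, thickness (mm)
rho_true, rho_bulk = 1002.87, 501.70   # true and bulk density (kg m^-3)

Dg = (L * W * T) ** (1 / 3)            # geometric mean diameter (mm)
sphericity = Dg / L * 100              # as a percentage of length
aspect_ratio = W / L * 100             # width-to-length ratio (%)
porosity = (1 - rho_bulk / rho_true) * 100

print(f"Dg = {Dg:.2f} mm")                     # ~26.00 mm, matching the abstract
print(f"sphericity = {sphericity:.1f} %")      # ~98.6 %, close to the reported 98.67 %
print(f"aspect ratio = {aspect_ratio:.1f} %")  # ~100.2 %, close to the reported 100.23 %
print(f"porosity = {porosity:.1f} %")          # ~50 %, close to the reported 49.89 %
```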
967 Multivariate High Order Fuzzy Time Series Forecasting for Car Road Accidents
Authors: Tahseen A. Jilani, S. M. Aqil Burney, C. Ardil
Abstract:
In this paper, we present a new multivariate fuzzy time series forecasting method. This method assumes m factors, with one main factor of interest. The history of the past three years is used for making new forecasts. This new method is applied to forecasting the total number of car accidents in Belgium using four secondary factors. We also compare our proposed method with existing fuzzy time series forecasting methods. Experimentally, it is shown that our proposed method performs better than the existing fuzzy time series forecasting methods. Practically, actuaries are interested in the analysis of the patterns of causalities in road accidents. Thus, using fuzzy time series, actuaries can define fuzzy premiums and fuzzy underwriting for car insurance and life insurance. The National Institute of Statistics, Belgium, provides a region-of-risk classification for each road. Using this risk classification, we can predict the premium rate and underwriting of insurance policy holders.
Keywords: Average forecasting error rate (AFER), fuzziness of fuzzy sets, fuzzy If-Then rules, multivariate fuzzy time series.
966 Machine Learning Approach for Identifying Dementia from MRI Images
Authors: S. K. Aruna, S. Chitra
Abstract:
This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing morphological changes helps to understand disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain's structural variability and is valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters at 0, 30, 60 and 90 degree orientations and the Gray Level Co-occurrence Matrix (GLCM). It is proposed to normalize and fuse the features. Independent Component Analysis (ICA) selects the features. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. Results showed that the proposed feature fusion classifier achieves higher classification accuracy.
Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.
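A hedged sketch of the framework's flow on toy textures (not OASIS scans): Gabor responses at the four orientations plus GLCM statistics are concatenated as a simple stand-in for the fusion step, ICA reduces them, and an SVM classifies. The image generator, fusion-by-concatenation and all parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import FastICA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.RandomState(0)

def make_image(smooth):
    """Toy 64x64 texture standing in for an MRI slice (not OASIS data)."""
    img = rng.rand(64, 64)
    if smooth:
        img = gaussian_filter(img, sigma=2)          # smoother texture for one class
    return (255 * (img - img.min()) / (np.ptp(img) + 1e-9)).astype(np.uint8)

def features(img):
    # Gabor magnitude statistics at 0, 30, 60, 90 degrees.
    gab = []
    for theta in np.deg2rad([0, 30, 60, 90]):
        real, imag = gabor(img.astype(float), frequency=0.2, theta=theta)
        gab += [np.mean(np.hypot(real, imag)), np.std(np.hypot(real, imag))]
    # GLCM statistics.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).ravel() for p in ("contrast", "energy", "homogeneity")]
    return np.hstack([gab, np.hstack(glcm_feats)])    # simple fusion by concatenation

X = np.array([features(make_image(smooth=i % 2 == 0)) for i in range(80)])
y = np.array([i % 2 for i in range(80)])

X = StandardScaler().fit_transform(X)                 # normalize before fusion/ICA
X = FastICA(n_components=6, random_state=0).fit_transform(X)   # ICA-based reduction
clf = SVC(kernel="rbf").fit(X[:60], y[:60])
print("held-out accuracy:", round(clf.score(X[60:], y[60:]), 3))
```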
965 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud
Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani
Abstract:
In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied to user data when stored and processed in Cloud data centers. The proposed architecture consists of establishing user-configurable ‘sticky’ policies on the Graphical User Interface (GUI) data-bound components during the application development phase to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes on the data components are formally defined to give the user full control over the degree and scope of privacy enforcement, including the type of execution containers used to process the data in the Cloud. This not only enhances the privacy-awareness of the developed Cloud services, but also results in major savings in performance and energy efficiency because the privacy mechanisms are applied solely to sensitive data units and not to all the user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.
Keywords: Privacy enforcement, Platform-as-a-Service privacy awareness, cloud computing privacy.
964 Automatic Building an Extensive Arabic FA Terms Dictionary
Authors: El-Sayed Atlam, Masao Fuketa, Kazuhiro Morita, Jun-ichi Aoe
Abstract:
Field Association (FA) terms are a limited set of discriminating terms that give us the knowledge to identify document fields, and they are effective in document classification, similar file retrieval and passage retrieval. However, the problem lies in the lack of an effective method to automatically extract relevant Arabic FA terms in order to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and the extension of FA terms to other languages such as Arabic would definitely strengthen further research. This paper presents a new method to extract Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. Experimental evaluation is carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79% respectively. Therefore, this method selects a higher number of relevant Arabic FA terms at high precision and recall.
Keywords: Arabic Field Association Terms, information extraction, document classification, information retrieval.
963 Voltage Problem Location Classification Using Performance of Least Squares Support Vector Machine LS-SVM and Learning Vector Quantization LVQ
Authors: Khaled Abduesslam. M, Mohammed Ali, Basher H Alsdai, Muhammad Nizam, Inayati
Abstract:
This paper presents voltage problem location classification using the performance of the Least Squares Support Vector Machine (LS-SVM) and Learning Vector Quantization (LVQ) in an electrical power system, implemented on the IEEE 39-bus New England system. The data were collected from time domain simulation using the Power System Analysis Toolbox (PSAT). Outputs from the simulation, such as voltage, phase angle, real power and reactive power, were taken as inputs to estimate voltage stability at particular buses based on the Power Transfer Stability Index (PTSI). The simulation was carried out on the IEEE 39-bus test system by considering increased load on the system buses. To verify the proposed LS-SVM, its performance was compared to that of LVQ. The results showed that LS-SVM is faster and better than LVQ. The results also demonstrated that LS-SVM achieved 0% misclassification whereas LVQ had 7.69% misclassification.
Keywords: IEEE 39 bus, Least Squares Support Vector Machine, Learning Vector Quantization, Voltage Collapse.
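A compact, hedged sketch of the LS-SVM classifier in the standard Suykens formulation, where training reduces to solving one linear system; the toy data merely stand in for the bus features (voltage, phase angle, real and reactive power), and the kernel and regularization values are assumptions, not the paper's settings.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class LSSVM:
    """Least Squares SVM classifier (Suykens formulation)."""
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):                      # y in {-1, +1}
        n = len(y)
        K = rbf_kernel(X, X, self.gamma)
        Omega = (y[:, None] * y[None, :]) * K
        # KKT system: [[0, y^T], [y, Omega + I/C]] [b; alpha] = [0; 1]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = y
        A[1:, 0] = y
        A[1:, 1:] = Omega + np.eye(n) / self.C
        rhs = np.concatenate([[0.0], np.ones(n)])
        sol = np.linalg.solve(A, rhs)
        self.b, self.alpha, self.X, self.y = sol[0], sol[1:], X, y
        return self

    def predict(self, Xt):
        K = rbf_kernel(Xt, self.X, self.gamma)
        return np.sign(K @ (self.alpha * self.y) + self.b)

# Toy two-class data standing in for per-bus features.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 4) + 1.5, rng.randn(50, 4) - 1.5])
y = np.hstack([np.ones(50), -np.ones(50)])
model = LSSVM().fit(X[::2], y[::2])
print("misclassification rate:", np.mean(model.predict(X[1::2]) != y[1::2]))
```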
962 Clinical Decision Support for Disease Classification Based on the Tests Association
Authors: Sung Ho Ha, Seong Hyeon Joo, Eun Kyung Kwon
Abstract:
Until recently, researchers have developed various tools and methodologies for effective clinical decision-making. Among these, chest pain diseases have been one of the important diagnostic issues, especially in emergency departments. To improve physicians' diagnostic ability, many researchers have developed diagnostic intelligence by using machine learning and data mining. However, most of the conventional methodologies have been based on a single classifier for disease classification and prediction, which shows moderate performance. This study utilizes an ensemble strategy to combine multiple different classifiers to help physicians diagnose chest pain diseases more accurately than before. Specifically, the ensemble strategy is applied by integrating decision trees, neural networks, and support vector machines. The ensemble models are applied to real-world emergency data. This study shows that the performance of the ensemble models is superior to that of each single classifier.
Keywords: Diagnosis intelligence, ensemble approach, data mining, emergency department.
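The integration step can be sketched, under stated assumptions, with a soft-voting ensemble of a decision tree, a neural network, and an SVM; the synthetic data stand in for the emergency-department records, and the exact combination rule of the study is not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for chest-pain records (test results as features, disease as label).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
mlp = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,),
                                                    max_iter=1000, random_state=0))
svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))

ensemble = VotingClassifier([("dt", tree), ("nn", mlp), ("svm", svm)], voting="soft")

for name, clf in [("tree", tree), ("mlp", mlp), ("svm", svm), ("ensemble", ensemble)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```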
961 A Learning-Community Recommendation Approach for Web-Based Cooperative Learning
Authors: Jian-Wei Li, Yao-Tien Wang, Yi-Chun Chang
Abstract:
Cooperative learning has been defined as learners working together as a team to solve a problem, complete a task, or accomplish a common goal, and it emphasizes the importance of interactions among members to promote overall learning performance. With the popularity of social networks, cooperative learning is no longer limited to traditional classroom teaching activities. Since social networks make it easy to organize online learners, establish common shared visions, and advance learning interaction, online communities and online learning communities have triggered the establishment of web-based societies. Numerous studies have indicated that the collaborative learning community is a critical factor in enhancing learning performance. Hence, this paper proposes a learning community recommendation approach, based on k-nearest neighbor (kNN) classification, to help a learner join appropriate learning communities. To demonstrate the viability of the proposed approach, it was implemented to recommend learning communities for 117 students. The experimental results indicate that the proposed approach can effectively recommend appropriate learning communities for learners.
Keywords: k-nearest neighbor classification, learning community, Cooperative/Collaborative Learning and Environments.
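A minimal sketch of the kNN idea behind the approach: each learner is represented by profile features, existing members are labelled by the community they joined, and the community of a new learner's nearest neighbors becomes the recommendation. The feature set and community labels are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Illustrative learner profiles: [interest_topic_score, activity_level, skill_level].
rng = np.random.RandomState(0)
communities = {0: "algorithms", 1: "web design", 2: "data science"}
X_members = np.vstack([rng.randn(40, 3) + center
                       for center in ([2, 0, 0], [0, 2, 0], [0, 0, 2])])
y_members = np.repeat([0, 1, 2], 40)            # community each existing learner joined

scaler = StandardScaler().fit(X_members)
knn = KNeighborsClassifier(n_neighbors=7).fit(scaler.transform(X_members), y_members)

# Recommend a community for a new learner based on the nearest existing members.
new_learner = np.array([[0.3, 0.1, 1.8]])
rec = knn.predict(scaler.transform(new_learner))[0]
print("recommended community:", communities[rec])
```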
960 Modern Problems of Russian Sport Legislation
Authors: Yurlov Sergey
Abstract:
The author examines modern problems of Russian sport legislation and whether it needs to be changed in order to allow all sportsmen to participate, train and enjoy other sportsmen's rights as Russian law mandates. The article provides an overview of the problems of Russian sport legislation and gives examples from foreign countries. In addition, the author suggests solutions for the existing legal problems.
Keywords: Amendment, legal problem, right, sport.
959 A Real-Time Specific Weed Recognition System Using Statistical Methods
Authors: Imran Ahmed, Muhammad Islam, Syed Inayat Ali Shah, Awais Adnan
Abstract:
The identification and classification of weeds are of major technical and economic importance in the agricultural industry. To automate these activities based on properties such as shape, color and texture, a weed control system is feasible. The goal of this paper is to build a real-time, machine vision weed control system that can detect weed locations. In order to accomplish this objective, a real-time robotic system was developed to identify and locate outdoor plants using machine vision technology and pattern recognition. The algorithm was developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed algorithm has been tested on weeds at various locations, which has shown the algorithm to be very effective in weed identification. Further, the results show very reliable performance on weeds under varying field conditions. The analysis of the results shows over 90 percent classification accuracy over 140 sample images (broad and narrow), with 70 samples from each category of weeds.
Keywords: Weed detection, image processing, real-time recognition, standard deviation.
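A hedged sketch of the underlying statistical idea on synthetic binary plant masks (not the authors' imaging pipeline): simple deviation-based statistics of the plant region separate broad-leaved from narrow-leaved (grass-like) samples, and a small classifier makes the broad/narrow decision. The mask generators and feature choices are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)

def broad_leaf():
    """Synthetic binary mask of a broad-leaved plant: one large blob."""
    img = np.zeros((64, 64))
    yy, xx = np.mgrid[:64, :64]
    cy, cx = rng.randint(20, 44, size=2)
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < rng.randint(100, 250)] = 1
    return img

def narrow_leaf():
    """Synthetic binary mask of a narrow-leaved (grass-like) plant: thin strips."""
    img = np.zeros((64, 64))
    for _ in range(rng.randint(4, 8)):
        col = rng.randint(0, 63)
        img[rng.randint(0, 20):rng.randint(40, 64), col:col + 2] = 1
    return img

def stats(img):
    """Simple deviation-based statistics of the plant mask (illustrative features)."""
    cols = img.sum(axis=0)                       # foreground count per column
    return [img.mean(), cols.std(), (cols > 0).mean()]

imgs = [broad_leaf() for _ in range(70)] + [narrow_leaf() for _ in range(70)]
X = np.array([stats(im) for im in imgs])
y = np.array([0] * 70 + [1] * 70)                # 0 = broad, 1 = narrow

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print("cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```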
958 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and the predicted variables. Past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured) and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of calculation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and then a version of the extended algorithm is defined in order to make it applicable to huge quantities of data.
Keywords: Predictive analysis, big data, predictive analysis algorithms, CART algorithm.
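The abstract does not give the extension itself, so the sketch below only illustrates the general idea it motivates: grow a CART tree per data partition in parallel (each partition could live on a separate node) and combine the partial trees by voting, compared against a single tree. joblib stands in for a cluster; this is an assumption, not the authors' extended algorithm.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Large-ish toy dataset split into partitions; one CART tree is grown per partition.
X, y = make_classification(n_samples=20000, n_features=25, n_informative=10, random_state=0)
Xtr, ytr, Xte, yte = X[:16000], y[:16000], X[16000:], y[16000:]

def fit_partition(Xc, yc):
    return DecisionTreeClassifier(max_depth=10, random_state=0).fit(Xc, yc)

chunks = zip(np.array_split(Xtr, 8), np.array_split(ytr, 8))
trees = Parallel(n_jobs=4)(delayed(fit_partition)(Xc, yc) for Xc, yc in chunks)

# Majority vote of the per-partition trees.
votes = np.stack([t.predict(Xte) for t in trees])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("partitioned-CART accuracy:", round((pred == yte).mean(), 3))

# Single CART trained on the full data, for comparison.
full = DecisionTreeClassifier(max_depth=10, random_state=0).fit(Xtr, ytr)
print("single-CART accuracy:", round(full.score(Xte, yte), 3))
```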
957 Artificial Intelligence Techniques Applications for Power Disturbances Classification
Authors: K. Manimala, K. Selvi, R. Ahila
Abstract:
Artificial Intelligence (AI) methods are increasingly being used for problem solving. This paper concerns the use of AI-type learning machines for the power quality problem, which is of general interest to power systems that must provide quality power to all appliances. Electrical power of good quality is essential for the proper operation of electronic equipment such as computers and PLCs. Malfunction of such equipment may lead to loss of production or disruption of critical services, resulting in huge financial and other losses. It is therefore necessary that critical loads be supplied with electricity of acceptable quality. Recognizing the presence of a disturbance and classifying it into a particular type is the first step in combating the problem. In this work, two classes of AI methods for power quality data mining are studied: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). We show that SVMs are superior to ANNs in two critical respects: SVMs train and run an order of magnitude faster, and SVMs give higher classification accuracy.
Keywords: Back propagation network, power quality, probabilistic neural network, radial basis function, support vector machine.
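A rough, hedged reproduction of the kind of comparison the abstract describes, timing and scoring an RBF SVM against a back-propagation MLP on synthetic disturbance-like features; the data, class count and network architecture are assumptions, not the authors' disturbance recordings or models.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic multi-class data standing in for power-quality disturbance features
# (e.g. sag, swell, harmonics, transient classes).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X = StandardScaler().fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM (RBF)", SVC(kernel="rbf", C=10.0)),
                  ("ANN (back-prop MLP)", MLPClassifier(hidden_layer_sizes=(64, 32),
                                                        max_iter=2000, random_state=0))]:
    t0 = time.time()
    clf.fit(Xtr, ytr)
    print(f"{name}: train {time.time() - t0:.2f}s, accuracy {clf.score(Xte, yte):.3f}")
```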
956 Numerical Modelling of Crack Initiation around a Wellbore Due to Explosion
Authors: Meysam Lak, Mohammad Fatehi Marji, Alireza Yarahamdi Bafghi, Abolfazl Abdollahipour
Abstract:
A wellbore is a hole that is drilled to aid in the exploration and recovery of natural resources, including oil and gas. Occasionally, in order to increase the productivity index and the porosity of the wellbore and reservoir, well stimulation methods are used. Hydraulic fracturing is one of these methods. Moreover, several explosions at the end of the well can stimulate the reservoir and create fractures around it. In this study, crack initiation in the rock around a wellbore due to explosion has been numerically modeled. One, two, three, and four pairs of explosive charges were set at the end of the wellbore on its wall. After each stage of the explosion, the results have been presented and discussed. The results show that this method can initiate, and probably propagate, several fractures around the wellbore.
Keywords: Crack initiation, explosion, finite difference modelling, well productivity.
955 Protein Graph Partitioning by Mutually Maximization of Cycle-Distributions
Authors: Frank Emmert Streib
Abstract:
The classification of protein structure is commonly not performed for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step towards protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph theoretical algorithm. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph corresponding to vertex sets obtained by the maximization of an objective function, which mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not utilize any other kind of information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. As a stopping criterion, we numerically calculate a significance level which indicates the stability of the predicted partition against a random rewiring of the protein graph. Hence, our algorithm terminates its iterative application automatically. We present results for one- and two-domain proteins and compare them with the domains manually assigned by the SCOP database; differences are discussed.
Keywords: Graph partitioning, unweighted graph, protein domains.