Search results for: gp-closed sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1222

1192 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks

Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi

Abstract:

Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area while, at the same time, maximizing the network lifetime, which is difficult because of the limited sensing angle and battery power of directional sensors. The problem is further complicated by the possibility that targets may have different coverage requirements. In the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets in such a way that each cover set satisfies the coverage requirements of all the targets. Several experiments were conducted to evaluate the performance of the proposed algorithm, and the results demonstrate that it contributes effectively to solving the problem.
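The cover-set idea at the heart of this approach can be illustrated with a simple greedy sketch: partition the sensors into disjoint cover sets, each of which covers every target, so the sets can be activated in turn to extend the lifetime. The sensors and targets below are invented, and the greedy rule stands in for the paper's learning automata, which are not reproduced here.

```python
# Greedy sketch of cover set formation: each cover set must cover every
# target; network lifetime grows with the number of disjoint cover sets
# that can be activated in turn.  (Illustrative only -- the paper itself
# uses learning automata, and these sensors are made up.)

def form_cover_sets(sensors, targets):
    """sensors: dict sensor_id -> set of targets it can cover."""
    remaining = dict(sensors)
    cover_sets = []
    while True:
        uncovered = set(targets)
        chosen = []
        for sid, covered in sorted(remaining.items()):
            if covered & uncovered:          # sensor helps -> take it
                chosen.append(sid)
                uncovered -= covered
            if not uncovered:
                break
        if uncovered:                        # cannot cover all targets again
            break
        cover_sets.append(chosen)
        for sid in chosen:                   # disjoint sets: consume sensors
            del remaining[sid]
    return cover_sets

sensors = {1: {"t1"}, 2: {"t2"}, 3: {"t1", "t2"}, 4: {"t2"}, 5: {"t1"}}
print(form_cover_sets(sensors, {"t1", "t2"}))   # -> [[1, 2], [3], [4, 5]]
```

Three disjoint cover sets are found, so the network could rotate through three activation rounds.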

Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata

Procedia PDF Downloads 376
1191 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach

Authors: Dongkwon Han, Sangho Kim, Sunil Kwon

Abstract:

Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, the key challenge in unconventional gas is the need for advanced approaches to production forecasting, owing to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model integrating machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells from the Eagle Ford shale basin was used for training and testing the ANN model. The input data were related to hydraulic fracturing, well completion, and shale gas productivity, and the output was cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared in terms of mean absolute percentage error (MAPE). The MAPE values were 44.22% for the ANN using all data sets; 10.08% (cluster 1), 5.26% (cluster 2), and 6.35% (cluster 3) for the clustering models; and 32.23% (ANN VI) and 23.19% (SVM VI) for the variable importance models. The results showed that the pre-trained ANN model provides more accurate results than the ANN model using all data sets.
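The metric used for the comparison, MAPE, can be computed as follows. The production values here are made-up illustrations, not the paper's well data.

```python
# Mean absolute percentage error (MAPE), the metric used to compare the
# ANN variants.  Values below are illustrative, not the paper's data.

def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 200.0, 400.0]      # e.g. cumulative production
predicted = [110.0, 180.0, 400.0]
print(round(mape(actual, predicted), 2))   # -> 6.67
```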

Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance

Procedia PDF Downloads 167
1190 Quantile Coherence Analysis: Application to Precipitation Data

Authors: Yaeji Lim, Hee-Seok Oh

Abstract:

Coherence analysis measures the linear time-invariant relationship between two data sets and has been studied in various fields such as signal processing, engineering, and medical science. However, classical coherence analysis tends to be sensitive to outliers and focuses only on the mean relationship. In this paper, we generalize the cross periodogram to a quantile cross periodogram, which provides a richer picture of the inter-relationship between two data sets and is a general version of the Laplace cross periodogram. We prove its asymptotic distribution under long-range dependence and compare it with ordinary coherence through numerical examples. We also present a real-data example to confirm the usefulness of quantile coherence analysis.

Keywords: coherence, cross periodogram, spectrum, quantile

Procedia PDF Downloads 354
1189 Exploring Counting Methods for the Vertices of Certain Polyhedra with Uncertainties

Authors: Sammani Danwawu Abdullahi

Abstract:

Vertex enumeration algorithms explore the methods and procedures of generating the vertices of general polyhedra formed by systems of equations or inequalities. The problem of enumerating the extreme points (vertices) of general polyhedra is shown to be NP-hard. This leads to exploring how to count the vertices of general polyhedra without listing them, a problem which is in turn shown to be #P-complete. Some fully polynomial randomized approximation schemes (FPRAS) for counting the vertices of special classes of polyhedra associated with down-sets, independent sets, 2-knapsack problems, and 2 x n transportation problems are presented, together with some open problems discovered along the way.
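In very low dimensions, the vertices of a polyhedron {x : Ax ≤ b} can still be enumerated by brute force: intersect every pair of constraint boundaries and keep the feasible intersection points. The 2-D sketch below (the unit square, chosen purely for illustration) shows why this does not scale, which is what motivates counting without listing.

```python
# Brute-force vertex enumeration in 2-D for {x : Ax <= b}: solve every
# pair of boundary lines and keep feasible intersections.  Practical only
# in tiny dimensions -- the general problem is hard, as the abstract notes.
from itertools import combinations

def vertices(A, b, eps=1e-9):
    found = []
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (c1, c2) = A[i], A[j]
        det = a1 * c2 - a2 * c1
        if abs(det) < eps:                   # parallel boundaries
            continue
        x = (b[i] * c2 - a2 * b[j]) / det    # Cramer's rule
        y = (a1 * b[j] - b[i] * c1) / det
        if all(ai * x + bi * y <= bv + eps for (ai, bi), bv in zip(A, b)):
            if not any(abs(x - u) < eps and abs(y - v) < eps for u, v in found):
                found.append((x, y))
    return sorted(found)

# Unit square: -x <= 0, -y <= 0, x <= 1, y <= 1
A = [(-1, 0), (0, -1), (1, 0), (0, 1)]
b = [0, 0, 1, 1]
print(vertices(A, b))   # -> [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```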

Keywords: counting with uncertainties, mathematical programming, optimization, vertex enumeration

Procedia PDF Downloads 312
1188 Application of a New Efficient Normal Parameter Reduction Algorithm of Soft Sets in Online Shopping

Authors: Xiuqin Ma, Hongwu Qin

Abstract:

A new efficient normal parameter reduction algorithm for soft sets in decision making was recently proposed. However, up to the present, few documents have focused on real-life applications of this algorithm. Accordingly, we apply the new efficient normal parameter reduction algorithm to real-life online shopping datasets, such as a BlackBerry mobile phone dataset. Experimental results show that this algorithm is both suitable and feasible for online shopping applications.

Keywords: soft sets, parameter reduction, normal parameter reduction, online shopping

Procedia PDF Downloads 479
1187 A Probabilistic View of the Spatial Pooler in Hierarchical Temporal Memory

Authors: Mackenzie Leake, Liyu Xia, Kamil Rocki, Wayne Imaino

Abstract:

In the Hierarchical Temporal Memory (HTM) paradigm, the effect of overlap between inputs on the activation of columns in the spatial pooler is studied. Numerical results suggest that similar inputs are represented by similar sets of columns and dissimilar inputs are represented by dissimilar sets of columns. It is shown that the spatial pooler produces these results under certain conditions on the connectivity and proximal thresholds. Following a discussion of how the parameters for these thresholds are initialized, qualitative arguments about the learning dynamics of the spatial pooler are presented.
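The overlap notion studied here can be sketched directly: a column activates when the overlap between the inputs it connects to and the currently active input bits reaches a proximal threshold. The column wiring below is invented for illustration; it is not the paper's configuration.

```python
# Overlap-driven column activation, the spatial pooler mechanism the
# paper analyses.  Columns and inputs here are invented for illustration.

def active_columns(columns, active_inputs, threshold):
    """columns: dict column_id -> set of input bits it connects to."""
    return sorted(cid for cid, synapses in columns.items()
                  if len(synapses & active_inputs) >= threshold)

columns = {"c1": {0, 1, 2}, "c2": {2, 3, 4}, "c3": {5, 6, 7}}
print(active_columns(columns, {0, 1, 2, 3}, threshold=2))   # -> ['c1', 'c2']
```

Similar inputs share many active bits, so by this rule they activate largely the same columns, which is the behaviour the paper studies.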

Keywords: hierarchical temporal memory, HTM, learning algorithms, machine learning, spatial pooler

Procedia PDF Downloads 307
1186 The Phenomena of False Cognates and Deceptive Cognates: Issues to Foreign Language Learning and Teaching Methodology Based on Set Theory

Authors: Marilei Amadeu Sabino

Abstract:

The aim of this study is to establish differences between the terms ‘false cognates’, ‘false friends’ and ‘deceptive cognates’, usually considered to be synonyms. It will be shown that they are not synonyms, since they do not designate the same linguistic process or phenomenon. Despite their differences in meaning, many pairs of formally similar words in two (or more) different languages are true cognates, although they are usually known as ‘false’ cognates – such as, for instance, the English and Italian lexical items ‘assist x assistere’; ‘attend x attendere’; ‘argument x argomento’; ‘apology x apologia’; ‘camera x camera’; ‘cucumber x cocomero’; ‘fabric x fabbrica’; ‘factory x fattoria’; ‘firm x firma’; ‘journal x giornale’; ‘library x libreria’; ‘magazine x magazzino’; ‘parent x parente’; ‘preservative x preservativo’; ‘pretend x pretendere’; ‘vacancy x vacanza’, to name but a few examples. Thus, one of the theoretical objectives of this paper is, firstly, to elaborate definitions establishing a distinction between words that are definitely ‘false cognates’ (derived from different etyma) and those that are just ‘deceptive cognates’ (derived from the same etymon). Secondly, based on Set Theory and on the concepts of equal sets, subsets, intersection of sets and disjoint sets, this study elaborates some theoretical and practical questions that will be useful in identifying more precisely the similarities and differences between cognate words of different languages; according to a graphic interpretation of sets, it will be possible to classify them and provide discernment about the processes of semantic change. Therefore, these issues might be helpful not only to the learning of second and foreign languages, but could also give insights into foreign and second language teaching methodology. Acknowledgements: FAPESP – São Paulo State Research Support Foundation – for the financial support offered (proc. n° 2017/02064-7).
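The set-theoretic classification proposed here can be made concrete with built-in set operations: compare the set of senses each word carries in its language using equality, inclusion, intersection, and disjointness. The sense sets below are invented placeholders, not the study's lexical data.

```python
# Classifying a cognate pair by comparing the sets of senses each word
# carries, using equal sets, subsets, intersections and disjoint sets.
# The sense sets are invented placeholders, not the study's data.

def classify(senses_a, senses_b):
    if senses_a == senses_b:
        return "equal sets: full semantic overlap"
    if senses_a <= senses_b or senses_b <= senses_a:
        return "subset: one meaning range contains the other"
    if senses_a & senses_b:
        return "intersection: partial overlap (deceptive cognate candidate)"
    return "disjoint: no shared meaning (false friend in use)"

en_library = {"book collection"}
it_libreria = {"bookshop", "bookcase"}
print(classify(en_library, it_libreria))
```

For the 'library x libreria' pair above, the two sense sets are disjoint, which is what makes the formally similar pair misleading to learners.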

Keywords: deceptive cognates, false cognates, foreign language learning, teaching methodology

Procedia PDF Downloads 302
1185 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models

Authors: Jay L. Fu

Abstract:

Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbor, support vector machine, random forest, and neural network, were developed. Data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess prediction accuracy. Key risk factors were identified, and the models were compared to determine the best prediction model. Among them, the random forest model performed best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important contributing factors to the detection of Alzheimer's. Across all the models used, the percentage of testing inputs for which at least 4 of the 5 models shared the same diagnosis was 90.42%. These machine learning models allow early detection of Alzheimer's with good accuracy, which ultimately leads to earlier treatment for these patients.
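The agreement statistic reported here (at least 4 of the 5 models sharing a diagnosis) amounts to a per-input majority count. The model outputs below are invented, not the study's predictions.

```python
from collections import Counter

# Fraction of testing inputs on which at least k models agree on the
# same diagnosis.  Predictions below are invented, not the study's.

def agreement_rate(predictions_per_input, k):
    agreed = sum(1 for preds in predictions_per_input
                 if Counter(preds).most_common(1)[0][1] >= k)
    return agreed / len(predictions_per_input)

preds = [
    ["dementia"] * 5,                            # all five models agree
    ["dementia"] * 4 + ["healthy"],              # four agree
    ["dementia", "healthy"] * 2 + ["dementia"],  # only three agree
]
print(agreement_rate(preds, k=4))   # -> 0.6666666666666666
```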

Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction

Procedia PDF Downloads 106
1184 An Ab Initio Molecular Orbital Theory and Density Functional Theory Study of Fluorous 1,3-Dion Compounds

Authors: S. Ghammamy, M. Mirzaabdollahiha

Abstract:

Quantum mechanical calculations of the energies, geometries, and vibrational wavenumbers of fluorous 1,3-dione compounds were carried out using the density functional theory (DFT/B3LYP) method with LANL2DZ basis sets. The calculated HOMO and LUMO energies show that charge transfer occurs in the molecules. The thermodynamic functions of the fluorous 1,3-dione compounds were also computed at the B3LYP/LANL2DZ level. The theoretical spectrograms for the F NMR spectra of the fluorous 1,3-dione compounds have also been constructed, and the F NMR nuclear shieldings of the fluoride ligands have been studied quantum-chemically.

Keywords: density functional theory, natural bond orbital, HOMO, LUMO, fluorous

Procedia PDF Downloads 356
1183 Prediction of Marine Ecosystem Changes Based on the Integrated Analysis of Multivariate Data Sets

Authors: D. Prozorkevitch, A. Mishurov, K. Sokolov, L. Karsakov, L. Pestrikova

Abstract:

The current body of knowledge about the marine environment and the dynamics of marine ecosystems includes a huge amount of heterogeneous data collected over decades, generally comprising a wide range of hydrological, biological, and fishery data. Marine researchers collect these data and analyze how and why the ecosystem has changed from past to present. Based on these historical records and the linkages between the processes, it is possible to predict future changes. Multivariate analysis of trends and their interconnections in the marine ecosystem may thus be used as an instrument for predicting further ecosystem evolution. A wide range of information about the components of the marine ecosystem, spanning more than 50 years, is used to investigate how these data arrays can help to predict the future.

Keywords: Barents Sea ecosystem, abiotic, biotic, data sets, trends, prediction

Procedia PDF Downloads 70
1182 An Exploratory Investigation into the Quality of Life of People with Multi-Drug Resistant Pulmonary Tuberculosis (MDR-PTB) Using the ICF Core Sets: A Preliminary Investigation

Authors: Shamila Manie, Soraya Maart, Ayesha Osman

Abstract:

Introduction: People diagnosed with multidrug resistant pulmonary tuberculosis (MDR-PTB) are subjected to prolonged hospitalization in South Africa. It has thus become essential for research to shift its focus from a purely medical approach and to include social and environmental factors when looking at the impact of the disease on those affected. Aim: To explore the factors affecting individuals with multidrug resistant pulmonary tuberculosis during long-term hospitalization using the comprehensive ICF core sets for obstructive pulmonary disease (OPD) and cardiopulmonary (CPR) conditions at Brooklyn Chest Hospital (BCH). Methods: A quantitative, descriptive, cross-sectional study design was utilized. A convenience sample of 19 adults at Brooklyn Chest Hospital was interviewed. Results: Most participants reported a decrease in exercise tolerance (b455: n=11), although it did not limit participation. Participants reported that a lack of privacy in the environment (e155) was a barrier to health, while the presence of health professionals (e355) and the provision of skills development services (e585) were facilitators of health and well-being. No differences were found in the functional ability of HIV-positive and HIV-negative participants in this sample. Conclusion: The ICF core sets appeared valid in identifying the barriers and facilitators experienced by individuals with MDR-PTB admitted to BCH. The hospital environment must be improved to add to the QoL of those admitted, especially by improving privacy within the wards. Although the social grant is seen as a facilitator, greater emphasis must be placed on preparing individuals to be economically active in the labour market when they are discharged.

Keywords: multidrug resistant tuberculosis, MDR, ICF core sets, health-related quality of life (HRQoL), hospitalization

Procedia PDF Downloads 311
1181 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which means integrating data from various different sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, so the number of applications based on open data is increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats, because there are no international standards specifying the formats of data sets in open data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data bases of environmental indicators in real time. In the same way, other governments (such as Andalucia or Bilbao) have published open data sets relative to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to run and visualize analyses over the real-time data.
Once the integration task is done, all the data from any government have the same format, and the analysis process can proceed in a computationally better way. The tool presented in this work therefore has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open source technology. R is a powerful open source programming language that allows us to process and analyze a huge amount of data with high performance, and there are R libraries, such as Shiny, for building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer, so that they can build their own applications.

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 278
1180 A Study of the Difference between the Mohr-Coulomb and the Barton-Bandis Joint Constitutive Models: A Case Study from an Iron Open Pit Mine, Canada

Authors: Abbas Kamalibandpey, Alain Beland, Joseph Mukendi Kabuya

Abstract:

Since a rock mass is a discontinuum medium, its behaviour is governed by discontinuities such as faults, joint sets, lithologic contacts, and bedding planes. Rock slope stability analysis in jointed rock masses is therefore largely dependent upon the constitutive equations of the discontinuities. This paper studies the difference between the Mohr-Coulomb (MC) and the Barton-Bandis (BB) joint constitutive numerical models for lithological contacts and joint sets. For the rock in these models, the generalized Hoek-Brown criterion has been considered. The joint roughness coefficient (JRC) and the joint wall compressive strength (JCS) are vital parameters in the BB model. The numerical models are applied to rock slope stability analysis in the Mont-Wright (MW) mine, which is owned and operated by ArcelorMittal Mining Canada (AMMC), one of the largest iron-ore open pit operations in Canada. One of the high walls of the mine has been selected to undergo slope stability analysis with the RS2D finite element software. Three piezometers have been installed in this zone to record pore water pressure, and the zone is monitored by radar. In this zone, the AMP-IF and QRMS-IF contacts and very persistent, altered joint sets in the IF control the rock slope behaviour. The height of the slope is more than 250 m, and the wall consists of different lithologies such as AMP, IF, GN, QRMS, and QR. To apply the BB model, the joint sets and geological contacts were scanned with Maptek, and their JRC was calculated by different methods. The numerical studies reveal that the JRC of the geological contacts (AMP-IF and QRMS-IF) and of the joint sets in the IF had a significant influence on the safety factor. After evaluating the results of the rock slope stability analysis and the radar data, the BB constitutive equation for discontinuities showed acceptable agreement with the real conditions in the mine. It should be noted that the difference in safety factors between the MC and BB joint constitutive models exceeds 30% in some cases.
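The two joint models can be contrasted directly through their standard shear strength relations: Mohr-Coulomb uses τ = c + σₙ·tan(φ), while Barton-Bandis uses τ = σₙ·tan(φr + JRC·log₁₀(JCS/σₙ)), which is where JRC and JCS enter. The parameter values in the sketch below are illustrative only, not the Mont-Wright data.

```python
import math

# Shear strength of a joint under normal stress sigma_n, under the
# Mohr-Coulomb and Barton-Bandis criteria.  Parameter values are
# illustrative only, not the Mont-Wright data.

def mohr_coulomb(sigma_n, cohesion, phi_deg):
    return cohesion + sigma_n * math.tan(math.radians(phi_deg))

def barton_bandis(sigma_n, jrc, jcs, phi_r_deg):
    # tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n))
    angle = phi_r_deg + jrc * math.log10(jcs / sigma_n)
    return sigma_n * math.tan(math.radians(angle))

sigma_n = 1.0   # MPa
print(round(mohr_coulomb(sigma_n, 0.1, 30.0), 3))                       # -> 0.677
print(round(barton_bandis(sigma_n, jrc=10.0, jcs=100.0, phi_r_deg=25.0), 3))  # -> 1.0
```

Unlike the linear MC envelope, the BB strength is nonlinear in σₙ, which is one reason the two models can yield safety factors differing by tens of percent.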

Keywords: Barton-Bandis criterion, Hoek-Brown and Mohr-Coulomb criteria, open pit, slope stability

Procedia PDF Downloads 58
1179 Using Gene Expression Programming in Learning Process of Rough Neural Networks

Authors: Sanaa Rashed Abdallah, Yasser F. Hassan

Abstract:

The paper introduces an approach in which rough sets, gene expression programming, and rough neural networks are used cooperatively for learning and classification support. The objective of the gene expression programming rough neural networks (GEP-RNN) approach is to obtain newly classified data with minimum error in the training and testing processes. The starting point of the GEP-RNN approach is an information system, and its output is a rough neural network structure, including the weights and thresholds, with minimum classification error.

Keywords: rough sets, gene expression programming, rough neural networks, classification

Procedia PDF Downloads 341
1178 An Experimental Exploration of the Interaction between Consumer Ethics Perceptions, Legality Evaluations, and Mind-Sets

Authors: Daphne Sobolev, Niklas Voege

Abstract:

During the last three decades, consumer ethics perceptions have attracted the attention of a large number of researchers. Nevertheless, little is known about the effect of the cognitive and situational contexts of the decision on ethics judgments. In this paper, the interrelationship between consumers’ ethics perceptions, legality evaluations, and mind-sets is explored. Legality evaluations represent the cognitive context of the ethical judgments, whereas mind-sets represent their situational context. Drawing on moral development theories and priming theories, it is hypothesized that both factors are significantly related to consumer ethics perceptions. To test this hypothesis, 289 participants were allocated to three mind-set experimental conditions and a control group. Participants in the mind-set conditions were primed for aggressiveness, politeness, or awareness of the negative legal consequences of breaking the law. Mind-sets were induced using a sentence-unscrambling task in which target words were included. Ethics and legality judgments were assessed using consumer ethics and internet ethics questionnaires, and all participants were asked to rate the ethicality and legality of the consumer actions described in them. The results showed that consumer ethics and legality perceptions were significantly correlated, and including legality evaluations as a variable in ethics judgment models increased the predictive power of the models. In addition, inducing aggressiveness in participants reduced their sensitivity to ethical issues, while priming awareness of negative legal consequences increased their sensitivity to ethics when uncertainty about the legality of the judged scenario was high. Furthermore, the correlation between ethics and legality judgments was significant across all mind-set conditions.
However, the results also revealed conflicts between ethics and legality perceptions: consumers considered 10%-14% of the presented behaviors unethical yet legal, or ethical yet illegal. In 10%-23% of the questions, participants indicated that they did not know whether the described action was legal or not. In addition, an asymmetry between the effects of aggressiveness and politeness priming was found. The results show that legality judgments and mind-sets interact with consumer ethics perceptions, portraying consumer ethical judgments as dynamic processes that are inseparable from other cognitive processes and situational variables. They highlight that legal and ethical education, as well as adequate situational cues at the place of service, could have a positive effect on consumer ethics perceptions. The theoretical contribution is discussed.

Keywords: consumer ethics, legality judgments, mind-set, priming, aggressiveness

Procedia PDF Downloads 259
1177 Applied Canonical Correlation Analysis to Explore the Relationship between Resourcefulness and Quality of Life in Cancer Population

Authors: Chiou-Fang Liou

Abstract:

Cancer has been one of the most life-threatening diseases worldwide for more than 30 years. The impact of cancer includes symptoms from the illness itself along with those of its treatments. The quality of life among patients diagnosed with cancer during cancer treatments has been conceptualized within four domains: Functional Well-Being, Social Well-Being, Physical Well-Being, and Emotional Well-Being. Patients with cancer often need to make adjustments to face all these challenges. The middle-range theory of Resourcefulness and Quality of Life has been applied to explore factors contributing to cancer patients’ needs. Resourcefulness is defined as a set of skills that can be learned and consists of Personal and Social Resourcefulness. Empirical evidence also supports a possible relationship between Resourcefulness and Quality of Life; however, little is known about the extent to which the two concepts are related to each other. This study therefore applied a multivariate technique, Canonical Correlation Analysis, to identify the relationship between the two sets of variables with multi-dimensional measures, Resourcefulness and Quality of Life, in cancer patients receiving treatments. After IRB approval, this multi-centered study took place at two medical centers in the Central Region of Taiwan. Sample: A total of 186 patients with various cancer diagnoses, receiving either radiation therapy or chemotherapy, consented and answered the questionnaires. Important findings from the generalized F test identified two canonical sets with several linear relations, explaining a total of 79.1% of the variance. The first canonical set found Personal Resourcefulness negatively related to Social Well-being, Functional Well-being, Emotional Well-being, and Physical Well-being, in that order. The second canonical set found Social Resourcefulness negatively related to Functional Well-being and Physical Well-being, yet positively related to Social Well-being and Emotional Well-being.
Discussion and Conclusion: The results of this study supported a statistically significant relationship between the two sets of variables, consistent with the theory. In addition, the results are considerably important for cancer patients receiving cancer treatments.
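The quantity canonical correlation analysis maximizes is the Pearson correlation between a weighted combination of one variable set and a weighted combination of the other. The sketch below only evaluates that correlation for given weights; the data and weights are invented, not the study's scores.

```python
import math

# CCA seeks weight vectors a, b maximising corr(a . X, b . Y).  This
# sketch evaluates that correlation for given weights; data and weights
# are invented for illustration.

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return cov / (su * sv)

def combined_corr(X, Y, a, b):
    u = [sum(ai * xi for ai, xi in zip(a, row)) for row in X]
    v = [sum(bi * yi for bi, yi in zip(b, row)) for row in Y]
    return pearson(u, v)

X = [[1, 2], [2, 1], [3, 4], [4, 3]]    # e.g. resourcefulness scores
Y = [[2, 1], [3, 2], [5, 6], [7, 5]]    # e.g. quality-of-life scores
print(round(combined_corr(X, Y, a=[1, 1], b=[1, 1]), 3))   # -> 0.978
```

A full CCA would search over a and b (an eigenvalue problem on the cross-covariance matrices) rather than fixing them as done here.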

Keywords: cancer, canonical correlation analysis, quality of life, resourcefulness

Procedia PDF Downloads 36
1176 Using Combination of Different Sets of Features of Molecules for Improved Prediction of Solubility

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Generally, absorption and bioavailability increase if solubility increases; therefore, it is crucial to predict solubility in drug discovery applications. Molecular descriptors and molecular properties are traditionally used for the prediction of water solubility. Various key descriptor sets are used for this purpose, namely Dragon descriptors, Morgan descriptors, MACCS keys, etc., and each has different prediction capabilities, with success varying between data sets. Structural features are another source commonly used for the prediction of solubility. However, there are few if any studies that combine three or more sets of properties or descriptors to produce a more powerful prediction model. Unlike available models, we used a combination of those features in a random forest machine learning model for improved solubility prediction, thereby contributing to drug discovery systems.

Keywords: solubility, molecular descriptors, machine learning, random forest

Procedia PDF Downloads 12
1175 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation

Authors: Serge B. Provost, Yishan Zhang

Abstract:

A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, this methodology is extended to the bivariate case. It then entails estimating the density function associated with each marginal distribution by means of the saddlepoint approximation and applying a bivariate adjustment to the product of the resulting density estimates. The connection to the distribution of empirical copulas is pointed out. As well, a novel approach is proposed for estimating the support of a distribution. Since these results rely solely on sample moments and empirical cumulant-generating functions, they are particularly well suited for modeling massive data sets. Several illustrative applications are presented.
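For a distribution with cumulant-generating function K, the (unnormalized) saddlepoint density at x is f̂(x) = exp(K(ŝ) − ŝx)/√(2πK″(ŝ)), where ŝ solves K′(ŝ) = x. A sketch for the standard exponential, whose CGF K(t) = −log(1 − t) makes ŝ available in closed form; the choice of distribution is ours, for illustration, and the paper's moment-based adjustment is not reproduced here.

```python
import math

# Saddlepoint density: f(x) ~ exp(K(s) - s*x) / sqrt(2*pi*K''(s)),
# where s solves K'(s) = x.  For the standard exponential,
# K(t) = -log(1 - t), so K'(t) = 1/(1 - t) gives s = 1 - 1/x and
# K''(s) = x**2.  (Illustrative choice; not the paper's adjustment.)

def saddlepoint_exponential(x):
    s = 1.0 - 1.0 / x                 # solves K'(s) = x
    K = -math.log(1.0 - s)            # = log(x)
    K2 = x ** 2                       # K''(s)
    return math.exp(K - s * x) / math.sqrt(2.0 * math.pi * K2)

# The approximation reproduces the true density exp(-x) up to the constant
# factor e / sqrt(2*pi) ~ 1.0844, the classic saddlepoint behaviour.
for x in (0.5, 1.0, 2.0):
    print(round(saddlepoint_exponential(x) / math.exp(-x), 4))   # -> 1.0844 each time
```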

Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation

Procedia PDF Downloads 121
1174 Investigation of the Effects of Simple Heating Processes on the Crystallization of Bi₂WO₆

Authors: Cisil Gulumser, Francesc Medina, Sevil Veli

Abstract:

In this study, the synthesis of photocatalytic Bi₂WO₆ was carried out using simple heating processes, and the effects of these treatments on the production of the desired compound were investigated. For this purpose, experiments with Bi(NO₃)₃·5H₂O and H₂WO₄ precursors were carried out to synthesize Bi₂WO₆ by four different combinations. These four combinations were grouped into two main sets, ‘treated in microwave reactor’ and ‘directly filtrated’; each main set was additionally split into two subsets, ‘calcined’ and ‘not calcined’. Calcination was conducted at temperatures of 400°C, 600°C, and 800°C. X-ray diffraction (XRD) and environmental scanning electron microscopy (ESEM) analyses were performed in order to investigate the crystal structure of the powdered product synthesized with each combination. In each main group, the highest crystallization of the produced compounds was observed for calcination at 600°C.

Keywords: bismuth tungstate, crystallization, microwave, photocatalysts

Procedia PDF Downloads 139
1173 Economic Valuation of Forest Landscape Function Using a Conditional Logit Model

Authors: A. J. Julius, E. Imoagene, O. A. Ganiyu

Abstract:

The purpose of this study is to estimate the economic value of the services and functions rendered by the forest landscape using a conditional logit model. For this study, attributes and levels of the forest landscape were chosen; specifically, the attributes include topographical forest type, forest type, forest density, recreational factors (side trips, accessibility of valleys), and willingness to pay (WTP). Based on these factors, 48 choice sets in balanced and orthogonal form were constructed using Statistical Analysis System (SAS) 9.1. The efficiency of the questionnaire was 6.02 (D-error 0.1), and the choice sets and socio-economic variables were analyzed. To reduce the cognitive load on respondents, the 48 choice sets were divided into 4 types in the questionnaire, so that each respondent answered 12 choice sets. The study population consisted of citizens from seven metropolitan cities, including Ibadan, Ilorin, and Osogbo, and annual WTP per household was elicited with an interview questionnaire, of which a total of 267 copies were recovered. As a result, Osogbo had a value of 0.45, and statistical similarities could not be found except for urban forests, forest density, the recreational factor, and the level of WTP. The average annual WTP per household for the forest landscape was 104,758 Naira (Nigerian currency); based on the outcome from this model, the total economic value of the services and functions enjoyed from the Nigerian forest landscape reaches approximately 1.6 trillion Naira.
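A conditional logit assigns each alternative j in a choice set the probability exp(V_j)/Σ_k exp(V_k), where V_j is the utility implied by the alternative's attributes. The attribute values and coefficients below are invented, not the study's estimates.

```python
import math

# Conditional logit choice probabilities: P(j) = exp(V_j) / sum_k exp(V_k),
# where V_j = beta . x_j is the utility of alternative j.  Attribute values
# and coefficients below are invented, not the study's estimates.

def choice_probabilities(alternatives, beta):
    utilities = [sum(b * x for b, x in zip(beta, attrs)) for attrs in alternatives]
    m = max(utilities)                          # stabilise the exponentials
    weights = [math.exp(u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Each alternative: (forest density, recreational access, cost)
alternatives = [(0.8, 1.0, -2.0), (0.5, 0.0, -1.0), (0.2, 1.0, -0.5)]
beta = (1.5, 0.7, 0.3)
probs = choice_probabilities(alternatives, beta)
print([round(p, 3) for p in probs])
print(round(sum(probs), 6))   # -> 1.0
```

WTP estimates are then derived from the ratio of an attribute's coefficient to the cost coefficient; that step is not shown here.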

Keywords: economic valuation, urban cities, services, forest landscape, logit model, Nigeria

Procedia PDF Downloads 89
1172 A Sociocybernetics Data Analysis Using Causality in Tourism Networks

Authors: M. Lloret-Climent, J. Nescolarde-Selva

Abstract:

The aim of this paper is to propose a mathematical model to determine invariant sets, set covering, orbits and, in particular, attractors in the set of tourism variables. Analysis was carried out based on a pre-designed algorithm, applying our interpretation of chaos theory developed in the context of General Systems Theory. This article sets out the causal relationships associated with tourist flows in order to enable the formulation of appropriate strategies. Our results can be applied to numerous cases. For example, in the analysis of tourist flows, these findings can be used to determine whether the behaviour of certain groups affects that of other groups and to analyse tourist behaviour in terms of the most relevant variables. Unlike statistical analyses that merely provide information on current data, our method uses orbit analysis to forecast, if attractors are found, the behaviour of tourist variables in the immediate future.
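The orbit-and-attractor machinery can be illustrated on a finite state set. A minimal sketch (the map f and the states are illustrative, not taken from the paper) of computing an orbit and detecting the attracting cycle:

```python
def orbit_with_attractor(f, x0, max_steps=1000):
    """Iterate f from x0, recording the orbit until a state repeats.

    Returns (transient, cycle): the states visited before the cycle
    begins, and the repeating cycle itself (the attractor), or
    (orbit, None) if no repeat occurs within max_steps.
    """
    seen = {}    # state -> index of first visit
    orbit = []
    x = x0
    for step in range(max_steps):
        if x in seen:
            i = seen[x]
            return orbit[:i], orbit[i:]   # transient part, cycle
        seen[x] = step
        orbit.append(x)
        x = f(x)
    return orbit, None

# Toy dynamics on a finite set of "tourism variable" states:
# every trajectory is eventually absorbed by the 2-cycle {1, 2}.
f = {0: 1, 1: 2, 2: 1, 3: 0}.__getitem__
transient, cycle = orbit_with_attractor(f, 3)
```

If a cycle is found, its states form the attractor that the forecasting step would report for the immediate future.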

Keywords: attractor, invariant set, tourist flows, orbits, social responsibility, tourism, tourist variables

Procedia PDF Downloads 474
1171 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

Application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: biometric data are acquired from an individual, feature sets are extracted, the feature set is compared against the set stored in the vault, and a result of the comparison is given. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including a Convolutional Neural Network (CNN) for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. Extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 113
1170 Training a Neural Network Using Input Dropout with Aggressive Reweighting (IDAR) on Datasets with Many Useless Features

Authors: Stylianos Kampakis

Abstract:

This paper presents a new algorithm for neural networks called “Input Dropout with Aggressive Re-weighting” (IDAR), aimed specifically at datasets with many useless features. IDAR combines two techniques (dropout of input neurons and aggressive re-weighting) in order to eliminate the influence of noisy features, and can be seen as a generalization of dropout. The algorithm is tested on two benchmark datasets: a noisy version of the iris dataset and the MADELON dataset. Its performance is compared against three other popular techniques for dealing with useless features: L2 regularization, LASSO, and random forests. The results demonstrate that IDAR can be an effective technique for handling datasets with many useless features.
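The abstract does not spell out the re-weighting rule, so the following is only an illustrative sketch: input dropout as described, plus a hypothetical "aggressive" step that halves the weight of features whose estimated importance falls below the mean. The decay rule, the importance values, and all names are assumptions, not IDAR's actual formulation.

```python
import numpy as np

def input_dropout(X, drop_prob, rng):
    """Zero each input feature independently with probability drop_prob,
    rescaling kept features so the expected input magnitude is unchanged
    (standard inverted dropout, applied to the input layer)."""
    mask = rng.random(X.shape) >= drop_prob
    return X * mask / (1.0 - drop_prob)

def aggressive_reweight(importance, decay=0.5, floor=1e-3):
    """Illustrative re-weighting step: aggressively shrink the weight of
    features whose estimated importance is below the mean, so noisy
    features lose influence quickly over successive epochs."""
    w = importance.copy()
    w[w < w.mean()] *= decay
    w = np.maximum(w, floor)
    return w / w.sum()   # renormalize to a weighting over features

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))          # batch of 4 samples, 6 features
Xd = input_dropout(X, drop_prob=0.3, rng=rng)
w = aggressive_reweight(np.array([0.4, 0.05, 0.3, 0.02, 0.2, 0.03]))
```

In a full training loop the reweighting would be interleaved with epochs of dropout training, driving the weights of useless features toward the floor.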

Keywords: neural networks, feature selection, regularization, aggressive reweighting

Procedia PDF Downloads 418
1169 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images

Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.), and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors.
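The Otsu baseline that the method is compared against picks the gray level maximizing between-class variance. A self-contained sketch on synthetic bimodal data (the lesion/background intensities are illustrative, not from the paper's database):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classic Otsu threshold: choose the gray level that maximizes the
    between-class variance of the resulting two-class split."""
    hist, edges = np.histogram(image, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    total_mean = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0 = 0.0      # cumulative class-0 probability
    sum0 = 0.0    # cumulative class-0 mean mass
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = sum0 / w0, (total_mean - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Synthetic bimodal "image": dark lesion pixels near 50, background near 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
t = otsu_threshold(img)
```

The type-II fuzzy approach of the abstract replaces this crisp optimum with an inter-cluster threshold that accounts for membership uncertainty.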

Keywords: defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets

Procedia PDF Downloads 448
1168 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications

Authors: K. P. Sandesh, M. H. Suman

Abstract:

Text processing plays an important role in information retrieval, data mining, and web search. Measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither document. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.
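The abstract gives the three cases but not the formula. A hypothetical per-feature scoring that follows the same case analysis (the reward and penalty values below are our assumptions for illustration, not the proposed measure):

```python
def feature_similarity(a, b):
    """Illustrative per-feature contribution following the three cases:
      - feature in both documents  -> reward, larger when counts agree
      - feature in exactly one     -> small penalty
      - feature in neither         -> no contribution
    a and b are the feature's counts in the two documents."""
    if a > 0 and b > 0:
        return 0.5 * (1.0 + min(a, b) / max(a, b))
    if a > 0 or b > 0:
        return -0.2
    return 0.0

def doc_similarity(d1, d2):
    """Sum feature contributions over the shared vocabulary; d1 and d2
    are term-count vectors of equal length."""
    return sum(feature_similarity(a, b) for a, b in zip(d1, d2))

d1 = [2, 0, 3, 0]   # counts of 4 vocabulary terms in document 1
d2 = [2, 0, 1, 5]   # counts of the same terms in document 2
s = doc_similarity(d1, d2)
```

Extending this to two sets of documents, as the abstract does, amounts to aggregating `doc_similarity` over document pairs.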

Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms

Procedia PDF Downloads 477
1167 Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC) within Operational Research (OR) with Sustainability and Phenomenology

Authors: Hussain Abdullah Al-Salamin, Elias Ogutu Azariah Tembe

Abstract:

Supply chain (SC) is an operational research (OR) approach and technique which acts as a catalyst within the central nervous system of business today. Without SC, any type of business is in the doldrums, hence entropy. SC is the lifeblood of business today because it is the pivotal hub which provides imperative competitive advantage. The paper presents a conceptual framework dubbed the Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC). The term homomorphic is derived from the abstract-algebraic term homomorphism (same shape), which also embeds the following related notions: monomorphism, isomorphism, automorphism, and endomorphism. The HCEFSC is intertwined and integrated with wide and broad sets of elements.

Keywords: homomorphism, isomorphism, monomorphisms, automorphisms, epimorphisms, endomorphism, supply chain, operational research (OR)

Procedia PDF Downloads 337
1166 Effects of the Different Recovery Durations on Some Physiological Parameters during 3 X 3 Small-Sided Games in Soccer

Authors: Samet Aktaş, Nurtekin Erkmen, Faruk Guven, Halil Taskin

Abstract:

This study aimed to determine the effects of 3 versus 3 small-sided games (SSG) with different recovery times on some physiological parameters in soccer players. Twelve soccer players from the Regional Amateur League volunteered for this study (mean±SD: age, 20.50±2.43 years; height, 177.73±4.13 cm; weight, 70.83±8.38 kg). Subjects performed soccer training five days per week. The protocol of the study was approved by the local ethics committee of the School of Physical Education and Sport, Selcuk University. The subjects were divided into teams of 3 players according to the Yo-Yo Intermittent Recovery Test. The field was 26 m wide and 34 m long. Subjects performed, twice and in random order, a series of 3 bouts of 3-a-side SSGs with 3 min and 5 min recovery durations. In the SSGs, each bout lasted 6 min. The percent of maximal heart rate (%HRmax), blood lactate concentration (LA), and Rated Perceived Exertion (RPE) scale points were collected before the SSGs and at the end of each bout. Data were analyzed by analysis of variance (ANOVA) with repeated measures. Significant differences in %HRmax were found between the pre-SSG value and the 1st, 2nd, and 3rd bouts in both the SSG with 3 min recovery duration and the SSG with 5 min recovery duration (p<0.05). Mean %HRmax in the SSG with 3 min recovery duration was significantly higher than in the SSG with 5 min recovery duration at both the 1st and 2nd bouts (p<0.05). No significant difference was found between bouts of either SSG in terms of LA (p>0.05). LA in the SSG with 3 min recovery duration was higher than in the SSG with 5 min recovery duration at the 2nd bout (p<0.05). RPE did not differ between SSGs (p>0.05). In conclusion, this study demonstrates that exercise intensity in SSGs with 3 min recovery durations is higher than in SSGs with 5 min recovery durations.

Keywords: small-sided games, soccer, heart rate, lactate

Procedia PDF Downloads 424
1165 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in themselves, and can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared using four well-known measures (Accuracy, Hamming Loss, Micro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods, while the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
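Two of the evaluation measures named above have precise standard definitions. A minimal sketch of Hamming loss and exact-match (subset) accuracy on toy label matrices (the data are illustrative):

```python
def hamming_loss(Y_true, Y_pred):
    """Fraction of label slots predicted incorrectly, averaged over all
    samples and labels (lower is better)."""
    n = sum(len(row) for row in Y_true)
    wrong = sum(t != p
                for rt, rp in zip(Y_true, Y_pred)
                for t, p in zip(rt, rp))
    return wrong / n

def subset_accuracy(Y_true, Y_pred):
    """Fraction of samples whose entire label vector is predicted exactly."""
    hits = sum(rt == rp for rt, rp in zip(Y_true, Y_pred))
    return hits / len(Y_true)

# Two samples, three binary labels each (e.g. three student outcomes).
Y_true = [[1, 0, 1], [0, 1, 0]]
Y_pred = [[1, 0, 0], [0, 1, 0]]
hl = hamming_loss(Y_true, Y_pred)     # 1 wrong slot out of 6
sa = subset_accuracy(Y_true, Y_pred)  # 1 of 2 samples exactly right
```

Hamming loss credits partially correct label sets, while subset accuracy does not, which is why the two can rank methods differently on the same benchmark.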

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-label classification, text mining

Procedia PDF Downloads 134
1164 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field for biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also has some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures were deployed as the feature set: sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE). In a silhouette calculation, the distances from each data point in a cluster to all other points within the same cluster, and to all data points in the closest cluster, are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, the silhouette score was used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except feature PE). In conclusion, the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
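The silhouette criterion itself is standard: s(i) = (b - a) / max(a, b), with a the mean within-cluster distance and b the mean distance to the nearest other cluster. A self-contained sketch (synthetic 2-D points standing in for EEG feature vectors) showing how the score separates a good clustering from a bad one:

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette over all points: s(i) = (b - a) / max(a, b), where
    a is the mean distance to points in i's own cluster and b is the mean
    distance to the points of the nearest other cluster."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False           # exclude the point itself
        if not same.any():
            scores.append(0.0)    # convention for singleton clusters
            continue
        a = dist[i, same].mean()
        b = min(dist[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated clusters score near 1; a bad labeling of the
# same four points scores below zero.
X = [[0, 0], [0, 1], [10, 0], [10, 1]]
good = silhouette_score(X, [0, 0, 1, 1])
bad = silhouette_score(X, [0, 1, 0, 1])
```

In the study's setting, the same comparison is made across k-means partitions of EEG feature sets rather than hand-picked labelings.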

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 248
1163 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

As a branch of modern mathematics, graphs are instruments for optimization and for solving practical applications in various fields, such as economic networks, engineering, network optimization, the geometry of social action and, generally, complex systems, including contemporary urban problems (path or transport efficiencies, biourbanism, etc.). This paper studies the interconnection of urban networks, which can lead to the problem of simulating one digraph by another. The simulation is made univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs proposed for the first time in this paper contributes to significantly improving the evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are presented in the paper.
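The generalized rough-set approximations on a digraph can be sketched concretely. Assuming the neighborhood of a vertex is its successor set (one natural choice, not necessarily the paper's), the lower approximation collects vertices whose whole neighborhood lies inside the target set, and the upper approximation those whose neighborhood meets it:

```python
def approximations(succ, X):
    """Lower and upper approximations of a vertex set X under the
    successor relation of a digraph: succ maps each vertex to the set
    of vertices it points to (its assumed rough-set neighborhood)."""
    X = set(X)
    lower = {v for v, nbrs in succ.items() if nbrs and set(nbrs) <= X}
    upper = {v for v, nbrs in succ.items() if set(nbrs) & X}
    return lower, upper

# Toy transport digraph: edges point to directly reachable zones.
succ = {
    "a": {"b", "c"},
    "b": {"c"},
    "c": {"d"},
    "d": {"b"},
}
X = {"b"}                       # zone of interest
lower, upper = approximations(succ, X)
```

The gap between the two approximations (the boundary region) is what the rough-set refinement of k-connectivity exploits.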

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 202