Search results for: mutant sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1352

1262 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy

Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells

Abstract:

Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a “critical period” exists in the MDX mouse between 4 and 6 weeks of age, when there is extensive muscle damage that is largely sub-clinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime consisting of twice-weekly treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the “critical period” at around 10 weeks of age and again at 14 weeks, 6 months, 9 months and 12 months of age. In addition, kinematic gait analysis was employed using a novel analysis algorithm in order to compare changes in gait and fine motor skills in diseased exercised MDX mice relative to exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including lipid profile) of the most severely affected muscles, the gastrocnemius and the tibialis anterior, was also measured at the same time intervals. Results indicate that by aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise, a more pronounced and severe phenotype comes to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), which we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this damage through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.

Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease

Procedia PDF Downloads 276
1261 Using Gene Expression Programming in Learning Process of Rough Neural Networks

Authors: Sanaa Rashed Abdallah, Yasser F. Hassan

Abstract:

The paper introduces an approach in which rough sets, gene expression programming and rough neural networks are used cooperatively for learning and classification support. The objective of the gene expression programming rough neural networks (GEP-RNN) approach is to obtain newly classified data with minimum error in the training and testing process. The starting point of the GEP-RNN approach is an information system, and its output is a rough neural network structure that includes the weights and thresholds with minimum classification error.

Keywords: rough sets, gene expression programming, rough neural networks, classification

Procedia PDF Downloads 383
1260 An Experimental Exploration of the Interaction between Consumer Ethics Perceptions, Legality Evaluations, and Mind-Sets

Authors: Daphne Sobolev, Niklas Voege

Abstract:

During the last three decades, consumer ethics perceptions have attracted the attention of a large number of researchers. Nevertheless, little is known about the effect of the cognitive and situational contexts of the decision on ethics judgments. In this paper, the interrelationship between consumers’ ethics perceptions, legality evaluations and mind-sets is explored. Legality evaluations represent the cognitive context of the ethical judgments, whereas mind-sets represent their situational context. Drawing on moral development theories and priming theories, it is hypothesized that both factors are significantly related to consumer ethics perceptions. To test this hypothesis, 289 participants were allocated to three mind-set experimental conditions and a control group. Participants in the mind-set conditions were primed for aggressiveness, politeness or awareness of the negative legal consequences of breaking the law. Mind-sets were induced using a sentence-unscrambling task in which target words were included. Ethics and legality judgments were assessed using consumer ethics and internet ethics questionnaires. All participants were asked to rate the ethicality and legality of the consumer actions described in the questionnaires. The results showed that consumer ethics and legality perceptions were significantly correlated. Moreover, including legality evaluations as a variable in ethics judgment models increased the predictive power of the models. In addition, inducing aggressiveness in participants reduced their sensitivity to ethical issues; priming awareness of negative legal consequences increased their sensitivity to ethics when uncertainty about the legality of the judged scenario was high. Furthermore, the correlation between ethics and legality judgments was significant across all mind-set conditions. However, the results revealed conflicts between ethics and legality perceptions: consumers considered 10%-14% of the presented behaviors unethical yet legal, or ethical yet illegal. In 10%-23% of the questions, participants indicated that they did not know whether the described action was legal or not. In addition, an asymmetry between the effects of aggressiveness and politeness priming was found. The results show that legality judgments and mind-sets interact with consumer ethics perceptions. Thus, they portray consumer ethical judgments as dynamic processes that are inseparable from other cognitive processes and situational variables. They highlight that legal and ethical education, as well as adequate situational cues at the place of service, could have a positive effect on consumer ethics perceptions. The theoretical contribution is discussed.

Keywords: consumer ethics, legality judgments, mind-set, priming, aggressiveness

Procedia PDF Downloads 297
1259 On the Other Side of Shining Mercury: In Silico Prediction of Cold Stabilizing Mutations in Serine Endopeptidase from Bacillus lentus

Authors: Debamitra Chakravorty, Pratap K. Parida

Abstract:

Cold-adapted proteases enhance wash performance in low-temperature laundry, resulting in reduced energy consumption and wear of textiles, and are also used in the dehairing process in the leather industry. Unfortunately, a drawback of cold-adapted proteases is their instability at higher temperatures: wild-type cold-adapted proteases denature at elevated temperatures and thus have short shelf lives. Therefore, proteases with broad temperature stability are required. Attempts to engineer cold-adapted proteases were made previously by directed evolution and random mutagenesis; the lacuna is that the time, capital, and labour involved in obtaining these variants are very demanding. Therefore, rational engineering for cold stability without compromising an enzyme's optimum pH and temperature for activity is the current requirement. In this work, mutations were rationally designed with the aid of a high-throughput computational methodology combining network analysis, evolutionary conservation scores, and molecular dynamics simulations for Savinase from Bacillus lentus, with the intention of rendering the mutants cold stable without affecting their temperature and pH optimum for activity. Further, an attempt was made to rationally incorporate a mutation into the most stable mutant obtained by this method in order to introduce oxidative stability; such enzymes are desired in detergents with bleaching agents. In silico analysis, performed as 300 ns molecular dynamics simulations at 5 different temperatures, revealed that three mutants had better cold stability than wild-type Savinase from Bacillus lentus. Conclusively, this work shows that cold adaptation without losing optimum temperature and pH stability, and additional stability against oxidative damage, can be rationally designed by in silico enzyme engineering. The key findings of this work were: first, the in silico data for H5 (cold-stable Savinase), used as a control in this work, corroborated its reported wet-lab temperature stability data; second, three cold-stable mutants of Savinase from Bacillus lentus were rationally identified; lastly, a mutation that will stabilize Savinase against oxidative damage was additionally identified.
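
A minimal sketch of one common way trajectory flexibility is compared between wild type and mutants, per-residue RMSF, assuming Cα coordinates have already been extracted from the simulations into NumPy arrays; the array shapes and file names below are hypothetical and do not come from the paper.

```python
import numpy as np

def per_residue_rmsf(coords):
    """coords: (n_frames, n_residues, 3) array of Calpha positions.
    Returns the root-mean-square fluctuation of each residue about its mean."""
    mean_pos = coords.mean(axis=0)                 # (n_residues, 3)
    diffs = coords - mean_pos                      # per-frame deviations
    return np.sqrt((diffs ** 2).sum(axis=2).mean(axis=0))

# Hypothetical pre-extracted trajectories (e.g. saved elsewhere with np.save)
wt = np.load("savinase_wt_ca.npy")                 # shape (n_frames, n_residues, 3)
mut = np.load("savinase_mutant_ca.npy")

delta = per_residue_rmsf(mut) - per_residue_rmsf(wt)
print("Residues with the largest flexibility change:", np.argsort(np.abs(delta))[-10:])
```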

Keywords: cold stability, molecular dynamics simulations, protein engineering, rational design

Procedia PDF Downloads 140
1258 Applied Canonical Correlation Analysis to Explore the Relationship between Resourcefulness and Quality of Life in Cancer Population

Authors: Chiou-Fang Liou

Abstract:

Cancer has been one of the most life-threatening diseases worldwide for more than 30 years. The influences of cancer include symptoms from the illness itself along with its treatments. Quality of life among patients diagnosed with cancer and undergoing cancer treatment has been conceptualized within four domains: Functional Well-Being, Social Well-Being, Physical Well-Being, and Emotional Well-Being. Patients with cancer often need to make adjustments to face all of these challenges. The middle-range theory of Resourcefulness and Quality of Life has been applied to explore factors contributing to cancer patients’ needs. Resourcefulness is defined as a set of skills that can be learned and consists of Personal and Social Resourcefulness. Empirical evidence also supports a possible relationship between Resourcefulness and Quality of Life. However, little is known about the extent to which the two concepts are related to each other. This study, therefore, applied a multivariate technique, Canonical Correlation Analysis, to identify the relationship between these two sets of multi-dimensional measures, Resourcefulness and Quality of Life, in cancer patients receiving treatment. After IRB approval, this multi-centered study took place at two medical centers in the Central Region of Taiwan. Sample: a total of 186 patients with various cancer diagnoses, receiving either radiation therapy or chemotherapy, consented and answered the questionnaires. The main findings from the generalized F test identified two canonical variate pairs with several linear relations, explaining a total of 79.1% of the variance. The first pair showed Personal Resourcefulness negatively related to Social Well-Being, Functional Well-Being, Emotional Well-Being, and Physical Well-Being, in that order. The second pair showed Social Resourcefulness negatively related to Functional Well-Being and Physical Well-Being, yet positively related to Social Well-Being and Emotional Well-Being. Discussion and conclusion: the results of this study support a statistically significant relationship between the two sets of variables, consistent with the theory. In addition, the results are considerably important for cancer patients receiving cancer treatments.
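
A minimal sketch of how a canonical correlation analysis between the two questionnaire blocks could be run once the item scores are arranged in two numeric matrices; scikit-learn's CCA is used here in place of whatever statistical package the authors employed, and the simulated data and variable layout are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical data: rows = 186 patients,
# X = resourcefulness subscales, Y = quality-of-life domain scores
rng = np.random.default_rng(0)
X = rng.normal(size=(186, 2))     # personal, social resourcefulness
Y = rng.normal(size=(186, 4))     # functional, social, physical, emotional well-being

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)

# Canonical correlation of each variate pair
for k in range(2):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical pair {k + 1}: r = {r:.3f}")
```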

Keywords: cancer, canonical correlation analysis, quality of life, resourcefulness

Procedia PDF Downloads 83
1257 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation

Authors: Serge B. Provost, Yishan Zhang

Abstract:

A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, this methodology is extended to the bivariate case. It then entails estimating the density function associated with each marginal distribution by means of the saddlepoint approximation and applying a bivariate adjustment to the product of the resulting density estimates. The connection to the distribution of empirical copulas will be pointed out. As well, a novel approach is proposed for estimating the support of a distribution. As these results solely rely on sample moments and empirical cumulant-generating functions, they are particularly well suited for modeling massive data sets. Several illustrative applications will be presented.
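
A minimal sketch of the classical (unadjusted) saddlepoint density approximation built from an empirical cumulant-generating function, to make the ingredients above concrete; the moment-based adjustment proposed in the paper is not reproduced, and the data are simulated.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
x_sample = rng.gamma(shape=3.0, scale=1.0, size=5000)    # simulated data

def K(s):       # empirical cumulant-generating function
    return np.log(np.mean(np.exp(s * x_sample)))

def K1(s):      # first derivative K'(s)
    w = np.exp(s * x_sample)
    return np.sum(x_sample * w) / np.sum(w)

def K2(s):      # second derivative K''(s)
    w = np.exp(s * x_sample)
    m1 = np.sum(x_sample * w) / np.sum(w)
    m2 = np.sum(x_sample ** 2 * w) / np.sum(w)
    return m2 - m1 ** 2

def saddlepoint_density(x, s_lo=-5.0, s_hi=0.3):
    s_hat = brentq(lambda s: K1(s) - x, s_lo, s_hi)      # solve K'(s) = x
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * K2(s_hat))

print(saddlepoint_density(3.0))   # density estimate near the sample mean
```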

Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation

Procedia PDF Downloads 162
1256 Investigation of the Effects of Simple Heating Processes on the Crystallization of Bi₂WO₆

Authors: Cisil Gulumser, Francesc Medina, Sevil Veli

Abstract:

In this study, the synthesis of photocatalytic Bi₂WO₆ was carried out using simple heating processes, and the effects of these treatments on the production of the desired compound were investigated. For this purpose, experiments with Bi(NO₃)₃.5H₂O and H₂WO₄ precursors were carried out to synthesize Bi₂WO₆ by four different combinations. These four combinations were grouped into two main sets, ‘treated in microwave reactor’ and ‘directly filtrated’; additionally, these main sets were divided into two subsets, ‘calcined’ and ‘not calcined’. Calcination was conducted at temperatures of 400ᵒC, 600ᵒC, and 800ᵒC. X-ray diffraction (XRD) and environmental scanning electron microscopy (ESEM) analyses were performed in order to investigate the crystal structure of the powdered product synthesized with each combination. The highest crystallization of the produced compounds was observed after calcination at 600ᵒC in each main group.

Keywords: bismuth tungstate, crystallization, microwave, photocatalysts

Procedia PDF Downloads 175
1255 Using Combination of Sets of Features of Molecules for Aqueous Solubility Prediction: A Random Forest Model

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Generally, absorption and bioavailability increase if solubility increases; therefore, it is crucial to predict solubility in drug discovery applications. Molecular descriptors and molecular properties are traditionally used for the prediction of water solubility. Various key descriptor sets are used for this purpose, namely Dragon descriptors, Morgan descriptors, MACCS keys, etc., and each has different prediction capabilities, with success varying between data sets. Structural features are another common source of information for solubility prediction. However, there are few to no studies that combine three or more sets of properties or descriptors to produce a more powerful prediction model. Unlike available models, we used a combination of these feature sets in a random forest machine learning model for improved solubility prediction, to better predict solubility and thereby contribute to drug discovery systems.
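
A minimal sketch of the kind of feature combination described above, assuming a table of SMILES strings with measured log-solubility values; the file and column names are hypothetical, and RDKit Morgan fingerprints, MACCS keys and a few physicochemical descriptors stand in for whichever feature sets are actually combined.

```python
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors, MACCSkeys
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("solubility.csv")             # hypothetical file with "smiles", "logS" columns

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    morgan = list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024))
    maccs = list(MACCSkeys.GenMACCSKeys(mol))
    props = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]
    return morgan + maccs + props               # combined feature vector

X = np.array([featurize(s) for s in df["smiles"]])
y = df["logS"].values

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```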

Keywords: solubility, random forest, molecular descriptors, MACCS keys

Procedia PDF Downloads 46
1254 Economic Valuation of Forest Landscape Function Using a Conditional Logit Model

Authors: A. J. Julius, E. Imoagene, O. A. Ganiyu

Abstract:

The purpose of this study is to estimate the economic value of the services and functions rendered by the forest landscape using a conditional logit model. For this study, attributes and levels of the forest landscape were chosen; specifically, the attributes include topographical forest type, forest type, forest density, recreational factors (side trips, accessibility of the valley), and willingness to pay (WTP). Based on these factors, 48 choice sets in a balanced and orthogonal design were generated using Statistical Analysis System (SAS) 9.1. The efficiency of the questionnaire was 6.02 (D-error: 0.1), and the choice sets and socio-economic variables were analyzed. To reduce the cognitive load on respondents, the 48 choice sets were divided into 4 questionnaire versions, so that each respondent answered 12 choice sets. The study population consisted of citizens from seven metropolitan cities, including Ibadan, Ilorin, and Osogbo; annual WTP per household was elicited with an interview questionnaire, and a total of 267 copies were recovered. As a result, Osogbo had 0.45, and statistical similarities could not be found except for urban forests, forest density, the recreational factor, and the level of WTP. Average annual WTP per household for the forest landscape was 104,758 Naira (Nigerian currency); based on the outcome of this model, the total economic value of the services and functions enjoyed from the Nigerian forest landscape reaches approximately 1.6 trillion Naira.
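
A minimal sketch of how a conditional (McFadden) logit model over such choice sets can be estimated by maximum likelihood; simulated data stand in for the survey responses, and the attribute dimensions are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_sets, n_alts, n_attr = 300, 3, 4               # choice sets, alternatives, attributes
X = rng.normal(size=(n_sets, n_alts, n_attr))    # attribute levels per alternative
true_beta = np.array([0.8, -0.5, 0.3, 1.0])

# Simulate one chosen alternative per choice set from the logit probabilities
util = X @ true_beta
prob = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_alts, p=p) for p in prob])

def neg_loglik(beta):
    v = X @ beta                                  # systematic utilities
    v -= v.max(axis=1, keepdims=True)             # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_sets), choice].sum()

res = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
print("estimated coefficients:", res.x)           # should be close to true_beta
```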

Keywords: economic valuation, urban cities, services, forest landscape, logit model, Nigeria

Procedia PDF Downloads 132
1253 A Sociocybernetics Data Analysis Using Causality in Tourism Networks

Authors: M. Lloret-Climent, J. Nescolarde-Selva

Abstract:

The aim of this paper is to propose a mathematical model to determine invariant sets, set covering, orbits and, in particular, attractors in the set of tourism variables. Analysis was carried out based on a pre-designed algorithm and applying our interpretation of chaos theory developed in the context of General Systems Theory. This article sets out the causal relationships associated with tourist flows in order to enable the formulation of appropriate strategies. Our results can be applied to numerous cases. For example, in the analysis of tourist flows, these findings can be used to determine whether the behaviour of certain groups affects that of other groups and to analyse tourist behaviour in terms of the most relevant variables. Unlike statistical analyses that merely provide information on current data, our method uses orbit analysis to forecast, if attractors are found, the behaviour of tourist variables in the immediate future.
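
A minimal sketch of the orbit idea on a finite set of variables: a variable is followed under repeated application of a causal successor map until a state repeats, and the repeating part is reported as an attractor. The toy relation below is hypothetical and not taken from the paper.

```python
def orbit(start, succ):
    """Follow start under the map succ until a state repeats.
    Returns (transient_part, attractor_cycle)."""
    seen, path = {}, []
    state = start
    while state not in seen:
        seen[state] = len(path)
        path.append(state)
        state = succ[state]
    i = seen[state]                      # first index of the repeated state
    return path[:i], path[i:]            # transient prefix, attractor cycle

# Hypothetical causal map between tourism variables
succ = {
    "marketing": "arrivals",
    "arrivals": "occupancy",
    "occupancy": "prices",
    "prices": "arrivals",                # arrivals -> occupancy -> prices -> arrivals
}

print(orbit("marketing", succ))
# (['marketing'], ['arrivals', 'occupancy', 'prices'])
```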

Keywords: attractor, invariant set, tourist flows, orbits, social responsibility, tourism, tourist variables

Procedia PDF Downloads 508
1252 Integration of Microarray Data into a Genome-Scale Metabolic Model to Study Flux Distribution after Gene Knockout

Authors: Mona Heydari, Ehsan Motamedian, Seyed Abbas Shojaosadati

Abstract:

Prediction of perturbations after genetic manipulation (especially gene knockout) is one of the important challenges in systems biology. In this paper, a new algorithm is introduced that integrates microarray data into a metabolic model. The algorithm was used to study the change in cell phenotype after knockout of the Gss gene in Escherichia coli BW25113. Implementation of the algorithm indicated that the gene deletion resulted in greater activation of the metabolic network. Growth yield was higher, and fewer regulating genes were identified for the mutant in comparison with the wild-type strain.
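
A minimal sketch of a plain flux balance analysis gene knockout in COBRApy, shown only to make the "knock out, then re-optimize" step concrete; the SBML file name and gene identifier are placeholders, and the expression-integration step of the paper's algorithm is not reproduced.

```python
import cobra

# Placeholder SBML file; any E. coli genome-scale model would do
model = cobra.io.read_sbml_model("e_coli_model.xml")

wild_type = model.optimize()
print("wild-type growth rate:", wild_type.objective_value)

# Knock out one gene (placeholder ID) and re-run flux balance analysis
with model:                                    # changes are reverted on exiting the block
    model.genes.get_by_id("b0000").knock_out()
    mutant = model.optimize()
    print("mutant growth rate:", mutant.objective_value)
    print(mutant.fluxes.head())                # part of the perturbed flux distribution
```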

Keywords: metabolic network, gene knockout, flux balance analysis, microarray data, integration

Procedia PDF Downloads 579
1250 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

Application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: biometric data are acquired from an individual, feature sets are extracted, the feature set is compared against the set stored in the vault, and a result of the comparison is given. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods, for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. Extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 150
1250 Training a Neural Network Using Input Dropout with Aggressive Reweighting (IDAR) on Datasets with Many Useless Features

Authors: Stylianos Kampakis

Abstract:

This paper presents a new algorithm for neural networks called “Input Dropout with Aggressive Re-weighting” (IDAR), aimed specifically at datasets with many useless features. IDAR combines two techniques (dropout of input neurons and aggressive re-weighting) in order to eliminate the influence of noisy features. The technique can be seen as a generalization of dropout. The algorithm is tested on two different benchmark data sets: a noisy version of the iris dataset and the MADELON data set. Its performance is compared against three other popular techniques for dealing with useless features: L2 regularization, LASSO and random forests. The results demonstrate that IDAR can be an effective technique for handling data sets with many useless features.

Keywords: neural networks, feature selection, regularization, aggressive reweighting

Procedia PDF Downloads 455
1249 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images

Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.), and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors.
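
A minimal sketch of a plain (type-I) two-cluster fuzzy C-means on grey levels used to derive an inter-cluster threshold; the type-II extension and the recursive merging of ambiguous pixels described above are not reproduced, and the "image" is simulated as a 1-D grey-level sample.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated grey levels: dark background plus a brighter "lesion"
pixels = np.concatenate([rng.normal(60, 12, 4000), rng.normal(140, 15, 1000)])

def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100):
    centers = np.quantile(x, np.linspace(0.25, 0.75, c))     # initial centers
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12    # distances to centers
        u = 1.0 / (d ** (2 / (m - 1)))                       # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers, u

centers, u = fuzzy_cmeans_1d(pixels)
threshold = centers.mean()            # simple inter-cluster threshold
print("cluster centers:", centers, "threshold:", threshold)
```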

Keywords: defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets

Procedia PDF Downloads 485
1248 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications

Authors: K. P. Sandesh, M. H. Suman

Abstract:

Text processing plays an important role in information retrieval, data mining, and web search. Measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither of the documents. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.

Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms

Procedia PDF Downloads 518
1247 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some similar properties, but they are also very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
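
A minimal sketch of the frequent-pattern step using the fpgrowth and association-rule utilities from the mlxtend package, applied to a toy one-hot genotype matrix; marker names, genotype coding and thresholds are hypothetical, and the permutation-based significance testing described above is not reproduced.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Toy one-hot items: each column is "marker=genotype" or the case label Y=2
data = pd.DataFrame({
    "SNP1=AB": [1, 1, 1, 0, 0, 1, 0, 0],
    "SNP2=AB": [1, 1, 1, 0, 1, 0, 0, 0],
    "SNP3=BB": [0, 1, 0, 1, 0, 0, 1, 0],
    "Y=2":     [1, 1, 1, 0, 0, 0, 0, 0],   # 1 = case, 0 = control
}, dtype=bool)

itemsets = fpgrowth(data, min_support=0.2, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

# Keep rules predicting case status from genotype patterns
digenic = rules[rules["consequents"] == frozenset({"Y=2"})]
print(digenic[["antecedents", "support", "confidence"]])
```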

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 122
1246 Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC) within Operational Research (OR) with Sustainability and Phenomenology

Authors: Hussain Abdullah Al-Salamin, Elias Ogutu Azariah Tembe

Abstract:

Supply chain (SC) is an operational research (OR) approach and technique which acts as a catalyst within the central nervous system of business today. Without SC, any type of business is in the doldrums, hence entropy. SC is the lifeblood of business today because it is the pivotal hub which provides imperative competitive advantage. The paper presents a conceptual framework dubbed the Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC). The term homomorphic is derived from the abstract algebraic term homomorphism (same shape), which also embeds the following mathematical constructions: monomorphisms, isomorphisms, automorphisms, and endomorphisms. The HCEFSC is intertwined and integrated with wide and broad sets of elements.

Keywords: homomorphism, isomorphism, monomorphisms, automorphisms, epimorphisms, endomorphism, supply chain, operational research (OR)

Procedia PDF Downloads 372
1245 Effects of the Different Recovery Durations on Some Physiological Parameters during 3 X 3 Small-Sided Games in Soccer

Authors: Samet Aktaş, Nurtekin Erkmen, Faruk Guven, Halil Taskin

Abstract:

This study aimed to determine the effects of 3 versus 3 small-sided games (SSG) with different recovery times on some physiological parameters in soccer players. Twelve soccer players from the Regional Amateur League volunteered for this study (mean±SD: age 20.50±2.43 years; height 177.73±4.13 cm; weight 70.83±8.38 kg). Subjects performed soccer training five days per week. The protocol of the study was approved by the local ethics committee of the School of Physical Education and Sport, Selcuk University. The subjects were divided into teams of 3 players according to the Yo-Yo Intermittent Recovery Test. The field was 26 m wide and 34 m long. Subjects performed, twice and in random order, a series of 3 bouts of 3-a-side SSGs with 3 min and 5 min recovery durations. In the SSGs, each set lasted 6 min. The percentage of maximal heart rate (%HRmax), blood lactate concentration (LA) and Rated Perceived Exertion (RPE) scale points were collected before the SSGs and at the end of each set. Data were analyzed by analysis of variance (ANOVA) with repeated measures. Significant differences were found between %HRmax before the SSG and in the 1st, 2nd, and 3rd sets in both the SSG with 3 min recovery duration and the SSG with 5 min recovery duration (p<0.05). Mean %HRmax in the SSG with 3 min recovery duration was significantly higher than in the SSG with 5 min recovery duration in both the 1st and 2nd sets (p<0.05). No significant difference was found between sets of either SSG in terms of LA (p>0.05). LA in the SSG with 3 min recovery duration was higher than in the SSG with 5 min recovery duration in the 2nd set (p<0.05). RPE in soccer players was not different between SSGs (p>0.05). In conclusion, this study demonstrates that exercise intensity in SSG with 3 min recovery durations is higher than in SSG with 5 min recovery durations.

Keywords: small-sided games, soccer, heart rate, lactate

Procedia PDF Downloads 465
1244 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets by themselves, and now they can be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper aims to present a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of this data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection has been produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measurements (Accuracy, Hamming Loss, Micro-F, Macro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested. The Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
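
A minimal sketch of one of the benchmarked techniques, a classifier chain, evaluated with two of the reported measures; synthetic multi-label data stand in for the TF-IDF features of the objective statements and the binary outcome matrix, and the base classifier is an arbitrary choice.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

# Synthetic stand-in for features of objective statements (X)
# and the binary ABET outcome matrix (Y)
X, Y = make_multilabel_classification(n_samples=500, n_features=100,
                                      n_classes=11, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0)
chain.fit(X_tr, Y_tr)
Y_pred = chain.predict(X_te)

print("Hamming loss:", hamming_loss(Y_te, Y_pred))
print("Micro-F1:", f1_score(Y_te, Y_pred, average="micro"))
print("Macro-F1:", f1_score(Y_te, Y_pred, average="macro"))
```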

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining

Procedia PDF Downloads 173
1243 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields of biometric technology. More and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest cluster, is determined. Thus, silhouettes provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
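
A minimal sketch of the clustering-quality step described above: k-means on an entropy-feature matrix scored with the silhouette coefficient; the feature matrix is simulated and stands in for the SE/FE/AE/PE features extracted from EEG recordings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

rng = np.random.default_rng(0)
# Simulated entropy features: 22 subjects x 20 epochs, 4 features per epoch
n_subjects, n_epochs = 22, 20
features = np.vstack([rng.normal(loc=center, scale=0.4, size=(n_epochs, 4))
                      for center in rng.normal(size=(n_subjects, 4))])

kmeans = KMeans(n_clusters=n_subjects, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)

print("overall silhouette score:", silhouette_score(features, labels))
print("per-sample silhouettes (first 5):", silhouette_samples(features, labels)[:5])
```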

Keywords: personal authentication, k-means clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 285
1242 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

A branch of modern mathematics, graph theory provides instruments for optimization and for solving practical applications in various fields such as economic networks, engineering, network optimization, the geometry of social action and, generally, complex systems, including contemporary urban problems (path or transport efficiencies, biourbanism, etc.). This paper studies the interconnection of urban networks, which can lead to the problem of simulating one digraph by another. The simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the digraph being simulated, including an alternative evaluation of k-connectivity. The rough set approach in (bi)digraphs, which is proposed for the first time in this paper, contributes to a significantly improved evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are presented in the paper.
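
A minimal sketch of rough set lower and upper approximations where the indiscernibility classes are replaced by successor neighbourhoods in a digraph, to make the "rough sets on graphs" idea concrete; the toy network is hypothetical, and the generalized rough sets used in the paper are richer than this.

```python
def approximations(neighbours, target):
    """Lower/upper approximations of a vertex set `target`
    with respect to the successor neighbourhoods of a digraph."""
    lower = {v for v, nb in neighbours.items() if nb and nb <= target}
    upper = {v for v, nb in neighbours.items() if nb & target}
    return lower, upper

# Hypothetical urban network: node -> set of successor nodes
neighbours = {
    "A": {"B", "E"},
    "B": {"C"},
    "C": {"D"},
    "D": {"B", "C"},
    "E": {"A"},
}
target = {"B", "C"}                       # e.g. congested zones

lower, upper = approximations(neighbours, target)
print("lower approximation:", lower)      # nodes whose successors are all in target
print("upper approximation:", upper)      # nodes with at least one successor in target
```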

Keywords: (bi)digraphs, rough set theory, systems of interacting agents, complex systems

Procedia PDF Downloads 243
1241 The Effects of Passive and Active Recoveries on Responses of Platelet Indices and Hemodynamic Variables to Resistance Exercise

Authors: Mohammad Soltani, Sajad Ahmadizad, Fatemeh Hoseinzadeh, Atefe Sarvestan

Abstract:

Exercise recovery is an important variable in designing resistance exercise training. This study determined the effects of passive and active recovery on the responses of platelet indices and hemodynamic variables to resistance exercise. Twelve healthy subjects (six men and six women; age 25.4±2.5 yrs) performed two resistance exercise protocols (six exercises including upper- and lower-body parts) in two separate sessions one week apart. The first resistance protocol included three sets of six repetitions at 80% of 1RM with 2 min of passive rest between sets and exercises, while the second protocol included three sets of six repetitions at 60% of 1RM followed by active recovery consisting of six repetitions of the same exercise at 20% of 1RM. The exercise volume was equalized. Three blood samples were taken before exercise, immediately after exercise and after 1 hour of recovery, and analyzed for fibrinogen and platelet indices. Blood pressure (BP), heart rate (HR) and rate pressure product (RPP) were measured before and immediately after exercise and every 5 minutes during recovery. Data analyses showed a significant increase in systolic blood pressure (SBP), HR, RPP and PLT in response to resistance exercise (P<0.05), and the changes in HR and RPP were significantly different between the two protocols (P<0.05). Furthermore, MPV and P-LCR did not change in response to resistance exercise, though significant reductions were observed after 1 h of recovery compared to before and after exercise (P<0.05). No significant changes in fibrinogen and PDW were observed following the two types of resistance exercise protocols (P>0.05). On the other hand, no significant differences in platelet indices were found between the two protocols (P>0.05). Resistance exercise induces changes in platelet indices and hemodynamic variables; these changes are not related to the type of recovery and returned to normal levels after 1 h of recovery.

Keywords: hemodynamic variables, platelet indices, resistance exercise, recovery intensity

Procedia PDF Downloads 142
1240 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where the dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In FCM clustering, the membership degree is constrained by the condition that the sum of a data object’s memberships over all clusters must be equal to one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization approaches have been applied to the fuzzy c-means clustering technique; this process introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 259
1239 Acute Effects of Local Vibration on Muscle Activation, Metabolic and Hormone Responses

Authors: Zong Yan Cai, Wen-Chyuan Chen, Chih-Min Wu

Abstract:

The purpose of this study was to investigate the acute effects of local vibration (LV) on muscle activation and metabolic and hormone responses. In total, 12 healthy, physically inactive, male adults participated in this study and completed an LV exercise session. During the LV exercise session, four custom-made vibration devices (diameter: 20 mm; thickness: 8 mm; weight: 0.022 g) were placed locally over the belly of the thigh of each subject’s non-dominant leg in a supine lying position, and subjects received 10 sets of 1 min at a frequency of 35-40 Hz, with 1–2 min of rest between sets. Surface electromyography (EMG) was obtained from the vastus medialis and rectus femoris, and the subjects’ rating of perceived exertion (RPE) and heart rate (HR) were measured. EMG data, RPE values and HR were obtained by averaging the results of the 10 sets of each exercise session. Blood samples were drawn before exercise, immediately after exercise, and 15 min and 30 min after exercise in each session for analysis of lactic acid (LA), growth hormone (GH), testosterone (T) and cortisol (C). The results indicated that HR did not increase after LV (63.18±3.5 to 63.25±2.58 beats/min, p > 0.05). The average RPE values during the LV exposure were 2.86±0.39. The root-mean-square %EMG values from the vastus medialis and rectus femoris were 19.02±2.19 and 8.25±2.20, respectively. There were no significant differences in LA, GH and T values after acute LV exercise compared with baseline values (LA: 0.68±0.11 to 0.7±0.1 mmol/L; GH: 0.06±0.05 to 0.57±0.27 ng/mL; T: 551.33±46.62 to 520.42±43.78 ng/dL, p>0.05). However, the LV treatment caused a significant decrease in C values after exercise (16.56±1.05 to 11.64±1.85 nmol/L, p<0.05). In conclusion, acute LV exercise only slightly increases muscle activation, which may not produce an effective exercise response. However, acute LV exercise reduces C levels, which may reduce the catabolic response. The probable reason might be partly due to the rhythmic vibration, which acts as a massage on the muscles.

Keywords: cortisol, growth hormone, lactic acid, testosterone

Procedia PDF Downloads 270
1238 Modified InVEST for WhatsApp Messages Forensic Triage and Search through Visualization

Authors: Agria Rhamdhan

Abstract:

WhatsApp, as the most popular mobile messaging app, has been used as evidence in many criminal cases. As the use of mobile messaging generates large amounts of data, forensic investigation faces the challenge of large data problems. The hardest part of finding this important evidence is that current practice relies on tools and techniques that require manual analysis to check all messages. As a result, analyzing large sets of mobile messaging data takes a lot of time and effort. Our work offers a methodology based on forensic triage to reduce large data to manageable sets, making detailed reviews easier, and then presents the results through interactive visualization that shows important terms, entities and relationships via intelligent ranking using Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Dirichlet Allocation (LDA) models. By implementing this methodology, investigators can improve investigation processing time and the accuracy of results.
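
A minimal sketch of the two ranking ingredients named above, TF-IDF term weighting and LDA topic modeling, applied to a handful of stand-in messages; the message text and parameter choices are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

messages = [                        # stand-in for extracted WhatsApp messages
    "meet me at the warehouse tonight",
    "transfer the money to the usual account",
    "did you send the payment yesterday",
    "warehouse delivery arrives tonight at ten",
]

# TF-IDF: rank terms by their weight summed over all messages
tfidf = TfidfVectorizer(stop_words="english")
tfidf_matrix = tfidf.fit_transform(messages)
weights = np.asarray(tfidf_matrix.sum(axis=0)).ravel()
terms = tfidf.get_feature_names_out()
top = sorted(zip(weights, terms), reverse=True)[:5]
print("top TF-IDF terms:", [t for _, t in top])

# LDA: group messages into latent topics
counts = CountVectorizer(stop_words="english").fit_transform(messages)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print("message-topic distribution:\n", lda.transform(counts))
```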

Keywords: forensics, triage, visualization, WhatsApp

Procedia PDF Downloads 168
1237 Summarizing Data Sets for Data Mining by Using Statistical Methods in Coastal Engineering

Authors: Yunus Doğan, Ahmet Durap

Abstract:

Coastal regions are among the areas most intensively used by both natural processes and a growing population. In coastal engineering, the most valuable data concern wave behavior. The amount of such data becomes very large because observations take place over periods of hours, days and months. In this study, some statistical methods, such as wave spectrum analysis methods and standard statistical methods, have been used. The goal of this study is to discover profiles of different coastal areas by using these statistical methods and thus to obtain an instance-based data set, derived from the big data, that can be analyzed with data mining algorithms. In the experimental studies, six sample data sets about wave behavior, obtained from 20-minute observations in Mersin Bay, Turkey, were converted to an instance-based form, and different clustering techniques from data mining were used to discover similar coastal places. Moreover, this study discusses how this summarization approach can be used in other fields that collect big data, such as medicine.

Keywords: clustering algorithms, coastal engineering, data mining, data summarization, statistical methods

Procedia PDF Downloads 361
1236 Fault Detection and Isolation of a Three-Tank System using Analytical Temporal Redundancy, Parity Space/Relation Based Residual Generation

Authors: A. T. Kuda, J. J. Dayya, A. Jimoh

Abstract:

This paper investigates a fault detection and isolation technique for measurement data sets from a three-tank system using analytical model-based temporal redundancy, which is based on residual generation via the parity equations/space approach. It also briefly outlines other approaches to model-based residual generation. The basic idea of parity-space residual generation in temporal redundancy is the dynamic relationship between sensor outputs and actuator inputs (the input-output model). These residuals are then used to detect whether or not the system is faulty and to indicate the location of the fault when it is. The method obtains good results by detecting and isolating faults from the considered sets of measurements generated from the system.
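
A minimal sketch of parity-space residual generation for a discrete-time state-space model over a short time window: a parity vector is taken from the left null space of the stacked observability matrix and applied to the stacked outputs and inputs. The system matrices below are an arbitrary toy model, not the actual three-tank dynamics.

```python
import numpy as np
from scipy.linalg import null_space

# Toy discrete-time model x(k+1) = A x + B u,  y(k) = C x  (not the real tank dynamics)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.1], [0.05]])
C = np.array([[1.0, 0.0]])
s = 3                                          # parity window length

# Stacked observability matrix O and input-to-output (Toeplitz) matrix H
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
H = np.zeros((s + 1, s + 1))                   # scalar blocks: one output, one input
for i in range(1, s + 1):
    for j in range(i):
        H[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ B)[0, 0]

w = null_space(O.T)[:, 0]                      # parity vector: w @ O = 0

def residual(y_win, u_win):
    """y_win, u_win: the last s+1 outputs/inputs; residual stays near 0 when fault-free."""
    return float(w @ (y_win - H @ u_win))

# Fault-free simulation check
x = np.array([1.0, -0.5])
ys, us = [], []
for k in range(s + 1):
    u = 0.2
    ys.append((C @ x).item())
    us.append(u)
    x = A @ x + B @ np.array([u])
print("fault-free residual:", residual(np.array(ys), np.array(us)))
```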

Keywords: fault detection, fault isolation, disturbing influences, system failure, parity equation/relation, structured parity equations

Procedia PDF Downloads 302
1235 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Instance selection (IS) techniques are used to reduce data size and thus improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets with a MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload of the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
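
A minimal sketch of the classic (sequential) condensed nearest neighbor rule that FCNN speeds up: instances misclassified by the current prototype store are absorbed until the subset classifies the training data consistently. This is neither the fast FCNN variant nor the MapReduce parallelization, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

def condensed_nearest_neighbor(X, y):
    """Return indices of a condensed subset that classifies X consistently (1-NN)."""
    store = [0]                                    # start with one prototype
    changed = True
    while changed:
        changed = False
        knn = KNeighborsClassifier(n_neighbors=1).fit(X[store], y[store])
        for i in range(len(X)):
            if i in store:
                continue
            if knn.predict(X[i:i + 1])[0] != y[i]:  # misclassified -> absorb it
                store.append(i)
                knn = KNeighborsClassifier(n_neighbors=1).fit(X[store], y[store])
                changed = True
    return np.array(store)

subset = condensed_nearest_neighbor(X, y)
print(f"kept {len(subset)} of {len(X)} instances "
      f"({100 * len(subset) / len(X):.1f}% of the data)")
```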

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 253
1234 Spectral Anomaly Detection and Clustering in Radiological Search

Authors: Thomas L. McCullough, John D. Hague, Marylesa M. Howard, Matthew K. Kiser, Michael A. Mazur, Lance K. McLean, Johanna L. Turk

Abstract:

Radiological search and mapping depends on the successful recognition of anomalies in large data sets which contain varied and dynamic backgrounds. We present a new algorithmic approach for real-time anomaly detection which is resistant to common detector imperfections, avoids the limitations of a source template library and provides immediate, and easily interpretable, user feedback. This algorithm is based on a continuous wavelet transform for variance reduction and evaluates the deviation between a foreground measurement and a local background expectation using methods from linear algebra. We also present a technique for recognizing and visualizing spectrally similar clusters of data. This technique uses Laplacian Eigenmap Manifold Learning to perform dimensional reduction which preserves the geometric "closeness" of the data while maintaining sensitivity to outlying data. We illustrate the utility of both techniques on real-world data sets.

Keywords: radiological search, radiological mapping, radioactivity, radiation protection

Procedia PDF Downloads 694
1233 3D Point Cloud Model Color Adjustment by Combining Terrestrial Laser Scanner and Close Range Photogrammetry Datasets

Authors: M. Pepe, S. Ackermann, L. Fregonese, C. Achille

Abstract:

3D models obtained with advanced survey techniques such as close-range photogrammetry and laser scanning are nowadays particularly appreciated in the Cultural Heritage and Archaeology fields. In order to produce high-quality models representing archaeological evidence and anthropological artifacts, the appearance of the model (i.e., color), beyond its geometric accuracy, is not a negligible aspect. The integration of close-range photogrammetry survey techniques with the laser scanner is still a topic of study and research. Combining point cloud data sets of the same object generated with both technologies, or with the same technology but registered at different moments and/or under different natural light conditions, can produce a final point cloud with accentuated color dissimilarities. In this paper, a methodology to make the different data sets uniform, to improve the chromatic quality and to highlight further details by balancing the point colors will be presented.
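
A minimal sketch of one simple color-balancing strategy for two registered point clouds: per-channel histogram (CDF) matching of the photogrammetric colors to the laser-scanner colors. The color arrays are simulated, and this is only an illustrative adjustment, not necessarily the methodology proposed in the paper.

```python
import numpy as np

def match_channel(source, reference):
    """Map source values so their distribution matches the reference channel."""
    s_sorted = np.sort(source)
    r_sorted = np.sort(reference)
    # empirical CDF position of each source value, mapped into the reference quantiles
    positions = np.searchsorted(s_sorted, source, side="right") / len(source)
    return np.interp(positions, np.linspace(0, 1, len(r_sorted)), r_sorted)

rng = np.random.default_rng(0)
# Simulated RGB colors (0-255) of overlapping points from the two surveys
photo_rgb = rng.normal([140, 120, 100], 20, size=(10000, 3)).clip(0, 255)
laser_rgb = rng.normal([110, 115, 105], 15, size=(10000, 3)).clip(0, 255)

balanced = np.column_stack([match_channel(photo_rgb[:, c], laser_rgb[:, c])
                            for c in range(3)])
print("mean before:", photo_rgb.mean(axis=0), "after:", balanced.mean(axis=0))
```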

Keywords: color models, cultural heritage, laser scanner, photogrammetry

Procedia PDF Downloads 280