Search results for: microarray data analysis.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13523

13103 An Empirical Mode Decomposition Based Method for Action Potential Detection in Neural Raw Data

Authors: Sajjad Farashi, Mohammadjavad Abolhassani, Mostafa Taghavi Kani

Abstract:

Information in the nervous system is coded as firing patterns of electrical signals called action potentials, or spikes, so an essential step in the analysis of neural mechanisms is the detection of action potentials embedded in the neural data. Several methods have been proposed in the literature for this purpose. In this paper, a novel method based on empirical mode decomposition (EMD) is developed. EMD is a decomposition method that extracts oscillations in different frequency ranges from a waveform. The method is adaptive and requires no a priori knowledge about the data and no parameter tuning. The results for simulated data indicate that the proposed method is comparable with wavelet-based methods for spike detection. For neural signals with a signal-to-noise ratio near 3, the proposed method detects more than 95% of action potentials accurately.
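
As an illustration of the pipeline the abstract describes (decompose, keep the fastest oscillations, threshold), here is a minimal Python sketch. It is not the authors' implementation: the PyEMD (EMD-signal) package, the number of fast IMFs kept, and the robust 4-sigma threshold are all illustrative assumptions.

    import numpy as np
    from PyEMD import EMD  # pip install EMD-signal

    def detect_spikes(signal, n_fast_imfs=2, k=4.0):
        # Decompose; rows of `imfs` are IMFs ordered fastest first.
        imfs = EMD().emd(signal)
        fast = imfs[:n_fast_imfs].sum(axis=0)      # high-frequency content
        sigma = np.median(np.abs(fast)) / 0.6745   # robust noise estimate
        return np.where(np.abs(fast) > k * sigma)[0]

    # Toy usage: a noisy trace with two artificial "spikes".
    rng = np.random.default_rng(0)
    x = 0.3 * rng.standard_normal(1200)
    x[300], x[800] = 3.0, -3.0
    print(detect_spikes(x))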

Keywords: EMD, neural data processing, spike detection, wavelet decomposition.

13102 Innovation in Traditional Game: A Case Study of Trainee Teachers' Learning Experiences

Authors: Malathi Balakrishnan, Cheng Lee Ooi, Chander Vengadasalam

Abstract:

The purpose of this study is to explore trainee teachers' learning experiences in innovating traditional games during a traditional game carnival, and it examines issues arising from multiple case studies of such experiences. A qualitative methodology was adopted, using observations, semi-structured interviews, and content analysis of reflective journals on the trainee teachers' experiences in creating and implementing innovative traditional games. Twelve groups of 36 trainee teachers registered for the Sports and Physical Education Management Course participated in the research during the carnival. Semi-structured interviews were administered after the trainee teachers created their innovative traditional games, and reflective journals were collected after the carnival day and content-analyzed. Inductive data analysis was used to evaluate the various data sources, with all collected data processed in NVivo, and the inductive findings were interpreted through Self-Determination Theory (SDT). The findings showed that the trainee teachers had positive game participation experiences, gained knowledge about traditional games, and were positively motivated to innovate the games. The data also revealed the influence of themes such as cultural significance and creativity. It can be concluded from the findings that the organized game carnival, a coursework requirement of the Institute of Teacher Training Malaysia, was able to enhance the trainee teachers' innovative thinking skills. SDT, as a multidimensional approach to motivation, was utilized; trainee teachers may therefore gain further learning experiences framed by SDT.

Keywords: Learning experiences, innovation, traditional games, trainee teachers.

13101 Unified, Low-Cost Analysis Framework for the Cycling Situation in Cities

Authors: Joerg Schweizer, Jason N. Meggs, Nazanin R. Dehkordi, Frederico Rupi, Anton Pashkevich

Abstract:

We propose a low-cost, uniform analysis framework allowing comparison of the strengths and weaknesses of the bicycling experience within and between cities. A primary component is an expedient, one-page mobility survey from which mode share is calculated. The bicycle mode share of many cities remains unknown, creating a serious barrier for both scientists and policy makers aiming to understand and increase rates of bicycling. Because of its low cost and expedience, this framework could be replicated widely, uniformly filling the data gap. The framework has been applied to 13 Central European cities with success. Data are collected on multiple modes, with specific questions regarding both behavior and quality of travel experience. Individual preferences are also collected, examining the conditions under which respondents would change behavior to adopt more sustainable modes (bicycling or public transportation). The result is a broad analysis opportunity intended to inform policy choices.
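
As a sketch of the framework's central calculation, the snippet below computes mode share from hypothetical answers to the survey's main-mode question; the column name and responses are invented for illustration.

    import pandas as pd

    # Hypothetical one-page survey responses: one row per respondent/trip.
    trips = pd.DataFrame({"main_mode": ["bicycle", "car", "bus", "bicycle",
                                        "walk", "car", "bicycle", "bus"]})

    # Mode share = fraction of trips made by each mode.
    print(trips["main_mode"].value_counts(normalize=True))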

Keywords: bicycling, modal splits, transport policy, surveys.

13100 Validity and Reliability of Competency Assessment Implementation (CAI) Instrument Using Rasch Model

Authors: Nurfirdawati Muhamad Hanafi, Azmanirah Ab Rahman, Marina Ibrahim Mukhtar, Jamil Ahmad, Sarebah Warman

Abstract:

This study was conducted to generate empirical evidence on the validity and reliability of the items of the Competency Assessment Implementation (CAI) instrument, using the Rasch model for polytomous data aided by Winsteps software version 3.68. Construct validity was examined by analyzing the point-measure correlation index (PTMEA) and the infit and outfit MNSQ values, while reliability was examined by analyzing the item reliability index. A survey technique was used as the major method, administering the CAI instrument to 156 teachers from vocational schools. The results show that the reliability of the CAI instrument items was between 0.80 and 0.98. The PTMEA correlations are positive, indicating that the items are able to distinguish between respondents of differing ability. The statistical results suggest that, of the 154 items, 12 should be omitted from the instrument. It is hoped that this study brings a new direction to the process of data analysis in educational research.
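
The screening logic the abstract describes can be sketched as follows; the MNSQ band of 0.6-1.4 and the item statistics below are conventional illustrative values, not values taken from the study.

    import pandas as pd

    # Hypothetical item statistics as exported from a Rasch analysis.
    items = pd.DataFrame({
        "item":   ["A1", "A2", "A3", "A4"],
        "ptmea":  [0.45, -0.10, 0.52, 0.38],  # point-measure correlation
        "infit":  [1.05, 1.80, 0.95, 1.30],   # infit MNSQ
        "outfit": [0.98, 2.10, 1.02, 1.25],   # outfit MNSQ
    })

    # Keep items with positive PTMEA and MNSQ inside the accepted band.
    lo, hi = 0.6, 1.4
    keep = ((items["ptmea"] > 0) & items["infit"].between(lo, hi)
            & items["outfit"].between(lo, hi))
    print("suggested for omission:", items.loc[~keep, "item"].tolist())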

Keywords: Competency Assessment, Reliability, Validity, Item Analysis.

13099 Concept for Knowledge out of Sri Lankan Non-State Sector: Performances of Higher Educational Institutes and Successes of Its Sector

Authors: S. Jeyarajan

Abstract:

A concept of knowledge for successful competition among Sri Lankan non-state higher educational institutes is derived from the study reported here. The concept was developed from knowledge management practices collected from reputable literature, such as Emerald Insight, and from the non-state higher education sector itself. A test was conducted to reveal whether these practices exist in Sri Lankan non-state higher education institutes and the reasons behind their existence. The absence of prior studies and the uncertainty over the number of participants available for data collection in the Sri Lankan context led to the selection of a qualitative research method, which used attributes of the Delphi method to manage this uncertainty. Data were collected using a dramaturgical method, which supports efficient use of the Delphi method, through purposive and modified snowball sampling techniques. The data were then analyzed using grounded theory development techniques in intermixed discourses to manage differing perspectives in the data. The analysis revealed agreement between the resulting grounded theories and the findings of a foreign study, even though the present study was conducted as qualitative research and the foreign study as quantitative research; the present study thus widens the discovery made in the foreign study. Further, having uncovered the reasons behind the observed practices, the results offer a concept of knowledge from the Sri Lankan non-state sector for managing higher educational institutes successfully.

Keywords: Snowball sampling adapted to purposive sampling, Delphi method in qualitative research, grounded theory development in intermixed discourses of analysis, knowledge management for the success of higher educational institutes.

13098 Wavelet and K-L Separability Based Feature Extraction Method for Functional Data Classification

Authors: Jun Wan, Zehua Chen, Yingwu Chen, Zhidong Bai

Abstract:

This paper proposes a novel feature extraction method, based on the Discrete Wavelet Transform (DWT) and K-L Separability (KLS), for the classification of Functional Data (FD). The method combines the decorrelation and reduction properties of the DWT with the additive independence property of KLS, which helps extract classification features from FD. It is an advance on the popular wavelet-based shrinkage methods for functional data reduction and classification. A theoretical analysis is given to prove the consistent convergence property, and a simulation study compares the proposed method with the earlier shrinkage methods. The experimental results show that the method improves classification efficiency, precision, and robustness.
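
A rough Python sketch of the two stages (DWT coefficients, then separability-ranked selection) is given below. It uses PyWavelets and, since the paper's KLS criterion is not reproduced here, substitutes a Fisher-type two-class score as a stand-in.

    import numpy as np
    import pywt  # pip install PyWavelets

    def dwt_features(X, wavelet="db4", level=3):
        # Flatten the DWT coefficients of each functional observation.
        return np.array([np.concatenate(pywt.wavedec(x, wavelet, level=level))
                         for x in X])

    def rank_by_separability(F, y, top=10):
        # Fisher-type score per coefficient (stand-in for K-L separability).
        f0, f1 = F[y == 0], F[y == 1]
        score = (f0.mean(0) - f1.mean(0)) ** 2 / (f0.var(0) + f1.var(0) + 1e-12)
        return np.argsort(score)[::-1][:top]

    # Toy functional data: two classes of noisy curves.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 128)
    X = np.vstack([np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal((50, 128)),
                   np.sin(4 * np.pi * t) + 0.2 * rng.standard_normal((50, 128))])
    y = np.repeat([0, 1], 50)
    print(rank_by_separability(dwt_features(X), y))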

Keywords: classification, functional data, feature extraction, K-L separability, wavelet.

13097 Analysis of a Population of Diabetic Patients Databases with Classifiers

Authors: Murat Koklu, Yavuz Unal

Abstract:

Data mining is a technique for extracting information from data: the process of obtaining hidden information and turning it into qualified knowledge using statistical and artificial intelligence techniques. One of its application areas is medicine, where it is used to build decision support systems for diagnosis by deriving meaningful information from medical data. In this study, a decision support system for the diagnosis of illness was built using data mining and three different artificial intelligence classification algorithms: Multilayer Perceptron, Naive Bayes, and J48. The Pima Indians dataset from the UCI Machine Learning Repository was used; it includes urinary and blood test results of 768 patients, comprising 8 feature vectors. The classification results obtained were compared with previous studies, and suggestions for future work are presented.
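
The comparison can be reproduced in outline with scikit-learn, as sketched below; J48 (Weka's C4.5) is approximated here by a generic decision tree, and the dataset is fetched from OpenML, where the Pima data are published under the name "diabetes".

    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # Pima Indians diabetes data (768 patients, 8 features) via OpenML.
    X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)

    models = {
        "Multilayer Perceptron": MLPClassifier(max_iter=1000),
        "Naive Bayes": GaussianNB(),
        "Decision tree (C4.5-like stand-in for J48)": DecisionTreeClassifier(),
    }
    for name, model in models.items():
        print(name, cross_val_score(model, X, y, cv=10).mean())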

Keywords: Artificial Intelligence, Classifiers, Data Mining, Diabetic Patients.

13096 Discovery of Time Series Event Patterns based on Time Constraints from Textual Data

Authors: Shigeaki Sakurai, Ken Ueno, Ryohei Orihara

Abstract:

This paper proposes a method that discovers time series event patterns from textual data with time information. The patterns are composed of sequences of events and each event is extracted from the textual data, where an event is characteristic content included in the textual data such as a company name, an action, and an impression of a customer. The method introduces 7 types of time constraints based on the analysis of the textual data. The method also evaluates these constraints when the frequency of a time series event pattern is calculated. We can flexibly define the time constraints for interesting combinations of events and can discover valid time series event patterns which satisfy these conditions. The paper applies the method to daily business reports collected by a sales force automation system and verifies its effectiveness through numerical experiments.
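
To make the idea of a time constraint concrete, the sketch below checks one constraint type only, a maximum gap between consecutive matched events; the event names, gap value, and greedy matching are illustrative, not the paper's seven definitions.

    from datetime import datetime, timedelta

    def occurs(events, pattern, max_gap=timedelta(days=7)):
        # True if `pattern` occurs as a subsequence of the timestamped
        # `events` with at most `max_gap` between consecutive matches.
        i, last_t = 0, None
        for t, e in sorted(events):
            if e == pattern[i] and (last_t is None or t - last_t <= max_gap):
                last_t, i = t, i + 1
                if i == len(pattern):
                    return True
        return False

    reports = [(datetime(2024, 1, 2), "visit"),
               (datetime(2024, 1, 5), "demo"),
               (datetime(2024, 1, 9), "order")]
    print(occurs(reports, ["visit", "demo", "order"]))  # True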

Keywords: Text mining, sequential mining, time constraints, daily business reports.

13095 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is driving change in all aspects of society. While the expansion of renewable energies proceeds, general studies on the potential of demand-side management have not convinced industry to incorporate smart grid considerations into its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of a strategic decision problem: integrating renewable energies into industrial energy management. This question arises from the consideration of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modeling, analysis, simulation, and optimization steps. This procedure helps quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
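
The tariff question at the core of the decision problem reduces to comparing energy costs under a flat contract and under spot prices. A toy calculation with invented numbers (not the study's data) illustrates the quantification step.

    import numpy as np

    # Hypothetical hourly load profile of one machine over a day (kWh).
    load = np.array([2, 2, 2, 2, 3, 5, 8, 9, 9, 8, 8, 7,
                     7, 8, 8, 8, 7, 6, 5, 4, 3, 2, 2, 2], dtype=float)

    flat_rate = 0.25  # EUR/kWh, standard contract (invented)
    spot = 0.18 + 0.10 * np.sin(np.linspace(0, 2 * np.pi, 24))  # stylized spot curve

    print("flat:", load.sum() * flat_rate, "EUR")
    print("spot:", load @ spot, "EUR")
    # Shifting load into cheap spot hours is what the optimization step exploits.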

Keywords: Data analytics, green production, industrial energy management, optimization, renewable energies, simulation.

13094 Error Analysis of Nonconventional Electrical Moisture-meter under Simplified Conditions

Authors: Kamil Ďurana, Robert Černý

Abstract:

An electrical apparatus for measuring moisture content was developed in our laboratory; it uses the dependence of electrical properties on the water content of the studied material. An error analysis of the apparatus was performed by measuring different volumes of water in a simplified specimen, i.e., a hollow plexiglass block, in order to avoid as many side effects as possible. The obtained data were processed using both basic and advanced statistics, and the results were compared with each other. The influence of water content on the accuracy of the measured data was studied, as well as the influence of variations in the apparatus' arrangement and in the actual methodology of its use. The overall coefficient of variation was 4%, and no trend was found in the dependence of the error on water content. Comparison with current surveys led to the conclusion that the studied apparatus can be used for indirect measurement of water content in porous materials, with a predictable error and under known conditions. Experiments with actual porous materials are not included here but are currently under investigation.

Keywords: device, capacitance method, error analysis, moisture meter

13093 Modified Data Mining Approach for Defective Diagnosis in Hard Disk Drive Industry

Authors: S. Soommat, S. Patamatamkul, T. Prempridi, M. Sritulyachot, P. Ineure, S. Yimman

Abstract:

As the slider process in the hard disk drive industry becomes more complex, defect diagnosis for yield improvement becomes more complicated and time-consuming. Manufacturing data analysis with a data mining approach is widely used to address this problem. An existing mining approach combining K-means clustering, the machine-oriented Kruskal-Wallis test, and multivariate charts has been applied to defect diagnosis, but it remains a semi-automatic diagnosis system. This article modifies the algorithm to support automatic decisions within the existing approach. Based on the research framework, the new approach performs automatic diagnosis and helps engineers find the defective factors about 50% faster than the existing approach.
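
The existing approach's core loop, clustering lots by yield and then ranking candidate factors by a Kruskal-Wallis test, can be sketched as follows; the factors, sample sizes, and two-cluster split are invented for illustration.

    import numpy as np
    from scipy.stats import kruskal
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Hypothetical lot data: yield plus two candidate process factors.
    yld = rng.normal(0.9, 0.05, 200)
    machine = rng.integers(0, 3, 200)                   # factor A
    temp = rng.normal(25, 1, 200) + 0.5 * (yld < 0.9)   # factor B, drifts with low yield

    # Step 1: split lots into good/bad groups on yield with K-means.
    group = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(yld.reshape(-1, 1))

    # Step 2: test each factor across groups; smaller p -> stronger suspect.
    for name, x in [("machine", machine), ("temp", temp)]:
        stat, p = kruskal(x[group == 0], x[group == 1])
        print(name, "p =", round(p, 4))

Automating the ranking of such p-values is, in essence, the decision step the modified algorithm performs automatically.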

Keywords: Slider process, defective diagnosis, data mining.

13092 Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks

Authors: K. Indra Gandhi

Abstract:

Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying, and collection of information for further analysis. However, software development has not been considered a separate entity in this data collection process, which has posed severe limitations on software development for WSNs. Software development for WSNs is a complex process, since the components involved are data-driven, network-driven, and application-driven in nature. This implies a tremendous need for separation of concerns from the software development perspective. A layered approach to designing the data acquisition process based on Model Driven Development (MDD) is proposed, as the sensed data collection process itself varies with the application under consideration. This work focuses on the layered view of the data acquisition process so as to ease software development. A metamodel is proposed that enables reusability and realizes the software as an adaptable component for WSN systems. Further, observation of user perception indicates that the proposed model helps improve programmers' productivity by realizing the collaborative system involved.

Keywords: Model-driven development, wireless sensor networks, data acquisition, separation of concern, layered design.

13091 The Concentration Analysis of CO2 Using ALOHA Code for Kuosheng Nuclear Power Plant

Authors: W. S. Hsu, Y. Chiang, H. C. Chen, J. R. Wang, S. W. Chen, J. H. Yang, C. Shih

Abstract:

Not only radioactive materials but also the ordinary chemicals stored in a power plant can pose a risk to nearby residents. In this research, the ALOHA code was used to perform a concentration analysis of CO2 storage burst and leakage conditions for the Kuosheng nuclear power plant (NPP). The Final Safety Analysis Report (FSAR) and related data were used in this study. Additionally, the ALOHA results were compared with the R.G. 1.78 failure criteria in order to confirm control room habitability. The comparison shows that the ALOHA result for the burst case was 0.923 g/m3, which was below the criteria; however, the result for the leakage case was 11.3 g/m3.

Keywords: BWR, ALOHA, habitability, Kuosheng.

13090 CFD Analysis of Two Phase Flow in a Horizontal Pipe – Prediction of Pressure Drop

Authors: P. Bhramara, V. D. Rao, K. V. Sharma, T. K. K. Reddy

Abstract:

In the design of condensers, the prediction of pressure drop is as important as the prediction of the heat transfer coefficient. Modeling two-phase flow, particularly liquid-vapor flow under diabatic conditions inside a horizontal tube, with CFD is difficult with the two-phase models available in FLUENT because of the continuously changing flow patterns. In the present work, CFD analysis of two-phase flow of refrigerants inside a horizontal tube of 0.0085 m inner diameter and 1.2 m length is carried out using the homogeneous model under adiabatic conditions. The refrigerants considered are R22, R134a, and R407C. The analysis is performed at different saturation temperatures and flow rates to evaluate the local frictional pressure drop. Using the homogeneous model, average properties are obtained for each refrigerant, which is treated as a single-phase pseudo-fluid. The pressure drop data so obtained are compared with the separated flow models available in the literature.
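
The homogeneous model's property averaging can be shown in a few lines. The definitions below (quality-weighted mixture density, McAdams mixture viscosity, Blasius friction factor) are standard textbook forms, but the refrigerant property values are illustrative, not taken from the paper.

    def homogeneous_dpdz(G, x, rho_l, rho_g, mu_l, mu_g, D):
        # Mixture treated as a single-phase pseudo-fluid.
        rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)  # mixture density
        mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_l)     # McAdams viscosity
        Re = G * D / mu_h
        f = 0.079 * Re ** -0.25                        # Blasius (Fanning, turbulent)
        return 2.0 * f * G ** 2 / (D * rho_h)          # frictional dp/dz, Pa/m

    # Illustrative R134a-like values at about 40 C (not the paper's data):
    print(homogeneous_dpdz(G=300.0, x=0.5, rho_l=1147.0, rho_g=50.0,
                           mu_l=1.6e-4, mu_g=1.2e-5, D=0.0085), "Pa/m")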

Keywords: Adiabatic conditions, CFD analysis, homogeneous model, liquid-vapor flow.

13089 Spatially Random Sampling for Retail Food Risk Factors Study

Authors: Guilan Huang

Abstract:

In 2013 and 2014, the U.S. Food and Drug Administration (FDA) collected data from selected fast food and full service restaurants to track changes in the occurrence of foodborne illness risk factors. This paper discusses how we customized a spatial random sampling method in light of the FDA's financial position and the availability of its resources, and how we enriched the restaurant data with location information. Location information for restaurants makes it possible to define random samples quantitatively within non-governmental units (e.g., within 240 kilometers of each data collector). Spatial analysis can also optimize data collectors' work plans and resource allocation. A spatial analytics and processing platform helped us handle the challenges of spatial random sampling. Our method fits the FDA's ability to pinpoint features of foodservice establishments and reduced both the time and the expense of data collection.
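
The core geometric step, building a frame of establishments within a radius of each data collector and drawing a uniform random sample from it, can be sketched as below; the coordinates and sample size are invented.

    import math, random

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) points in km.
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    random.seed(1)
    collector = (38.9, -77.0)  # hypothetical data-collector location
    restaurants = [("R%d" % i, (38.9 + random.uniform(-3, 3),
                                -77.0 + random.uniform(-3, 3)))
                   for i in range(500)]

    # Frame: establishments within 240 km of the collector; then sample.
    frame = [r for r in restaurants if haversine_km(collector, r[1]) <= 240]
    print(len(frame), random.sample(frame, k=min(20, len(frame)))[:3])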

Keywords: Geospatial technology, restaurant, retail food risk factors study, spatial random sampling.

13088 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error, but quality control laboratories in low-income economies may face barriers when attempting to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports be accurate and reliable. Zinc sulphate (ZnSO4) tablets are used to treat diarrhea in the pediatric population and as adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were entered into two spreadsheets to automate the calculations, and further checks were built into the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at the 95% confidence interval) were obtained from Student's t-test comparisons of the mean values of the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human calculation errors were minimized when procedures were automated in the quality control laboratory, and the assay procedure was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
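
The arithmetic the spreadsheets automate can be outlined as follows. The functions mirror a typical 1:1 Zn-EDTA complexometric calculation and are illustrative only; they are not a substitute for the USP procedure or the validated spreadsheets themselves.

    ZN_MW = 65.38  # g/mol, elemental zinc

    def edta_molarity(zn_mass_g, titre_ml):
        # Standardization: EDTA molarity from titration against pure zinc.
        return (zn_mass_g / ZN_MW) / (titre_ml / 1000.0)

    def zinc_per_tablet_mg(m_edta, titre_ml, tablets_in_sample):
        # Assay: mg elemental zinc per tablet from the sample titration.
        mol_zn = m_edta * titre_ml / 1000.0
        return mol_zn * ZN_MW * 1000.0 / tablets_in_sample

    M = edta_molarity(zn_mass_g=0.1635, titre_ml=25.0)  # ~0.1 M check
    print("EDTA:", round(M, 4), "M")
    print("Zn/tablet:", round(zinc_per_tablet_mg(M, 20.2, 10), 1), "mg")

Encoding each formula once, with built-in replicate checks, is what removes the per-analysis manual calculation the paper identifies as the error source.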

Keywords: Data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets.

13087 Quantifying and Adjusting the Effects of Publication Bias in Continuous Meta-Analysis

Authors: N.R.N. Idris

Abstract:

This study uses simulated meta-analyses to assess the effects of publication bias on meta-analysis estimates and to evaluate the efficacy of the trim and fill method in adjusting for these biases. The estimated effect sizes and standard errors were evaluated in terms of statistical bias and coverage probability. The results demonstrate that if publication bias is not adjusted for, it can lead to up to 40% bias in the treatment effect estimates. Use of the trim and fill method can reduce the bias in the overall estimate by more than half. The method is optimal in the presence of moderate underlying bias but has minimal effect in the presence of low or severe publication bias. Additionally, the trim and fill method improves the coverage probability by more than half when the data are subjected to the same level of publication bias as the unadjusted data. The method, however, tends to produce false positive results and will incorrectly adjust the data for publication bias up to 45% of the time. Nonetheless, the bias introduced into the estimates by this adjustment is minimal.

Keywords: Publication bias, Trim and Fill method, percentage relative bias, coverage probability

13086 Computational Aspects of Regression Analysis of Interval Data

Authors: Michal Cerny

Abstract:

We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact of the loss of information on the OLS estimator caused by interval censoring and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input-interval output model. We give an argument that natural descriptions of the OLS-set in the crisp input-interval output case cannot be computed in polynomial time. We then derive easily computable approximations of the OLS-set which can be used in place of the exact description, and illustrate the approach with an example.
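
A crude way to visualize the OLS-set for the crisp input-interval output case is Monte Carlo sampling of admissible outputs; this inner approximation is illustrative only and is not the paper's computable approximation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Crisp inputs; outputs known only up to intervals [y_lo, y_hi].
    X = np.column_stack([np.ones(20), np.linspace(0, 1, 20)])
    y_mid = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(20)
    y_lo, y_hi = y_mid - 0.3, y_mid + 0.3

    # Sample admissible outputs and collect the resulting OLS estimates.
    betas = np.array([np.linalg.lstsq(X, rng.uniform(y_lo, y_hi), rcond=None)[0]
                      for _ in range(2000)])
    print("beta0 in [%.3f, %.3f]" % (betas[:, 0].min(), betas[:, 0].max()))
    print("beta1 in [%.3f, %.3f]" % (betas[:, 1].min(), betas[:, 1].max()))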

Keywords: Linear regression, interval-censored data, computational complexity.

13085 Efficiency Evaluation of E-Commerce Websites

Authors: A. K. Abd El-Aleem, W. F. Abd El-wahed, N. A. Ismail, F. A. Torkey

Abstract:

This study suggests a new set of evaluation criteria for measuring the efficiency of real-world e-commerce websites. The evaluation criteria cover the design, usability, and performance of a website, and the Data Envelopment Analysis (DEA) technique is used to measure website efficiency. An efficient website is defined as one that generates the most outputs using the smallest amount of inputs. Inputs are measurements representing the effort required to build, maintain, and operate the site; output is the amount of traffic the site generates, measured as the average number of daily hits and the average number of daily unique visitors.
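
As a sketch of how such scores can be computed, the snippet below solves the standard input-oriented CCR envelopment model with scipy; the model choice and the four hypothetical sites are assumptions, since the paper's data are not reproduced here.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        # Input-oriented CCR model for unit j0; variables [theta, lambdas].
        m, n = X.shape
        s = Y.shape[0]
        c = np.zeros(1 + n); c[0] = 1.0                    # minimize theta
        A = np.vstack([np.hstack([-X[:, [j0]], X]),        # X lam <= theta x0
                       np.hstack([np.zeros((s, 1)), -Y])]) # Y lam >= y0
        b = np.concatenate([np.zeros(m), -Y[:, j0]])
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(0, None)] * (1 + n), method="highs")
        return res.fun

    # Hypothetical sites: inputs = [build effort, maintenance effort];
    # outputs = [avg daily hits, avg daily unique visitors].
    X = np.array([[5.0, 8.0, 6.0, 4.0], [2.0, 3.0, 2.5, 1.5]])
    Y = np.array([[900.0, 1200.0, 700.0, 800.0], [300.0, 350.0, 260.0, 320.0]])
    for j in range(X.shape[1]):
        print("site", j, "efficiency =", round(ccr_efficiency(X, Y, j), 3))

A unit scoring 1.0 lies on the efficient frontier; lower scores indicate how far inputs could be contracted at the observed output level.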

Keywords: Data Envelopment Analysis, E-commerce, Efficiency.

13084 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. Collected data carry uncertainties due to human or machine errors and other sources, and these uncertainty factors limit the understanding of system component failure because the data are incomplete. In such situations, classical methods need to be generalized to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set: in a Fuzzy Set (FS), each element is associated with a point value selected from the unit interval [0, 1], termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS: instead of the point-based membership used in FS, interval-based membership is used in VS, which is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree, because PN allow efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated on the press unit of a paper mill.
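
As a tiny illustration of why interval-based membership matters, interval-valued component reliabilities can be propagated through a series structure by interval arithmetic; the bounds below are invented, and the full Lambda-Tau/Petri-net machinery of the paper is not reproduced.

    # Interval-valued (vague-style) component reliabilities [lower, upper]
    # propagated through a series system: R = product of R_i.
    def series_interval(components):
        lo = hi = 1.0
        for r_lo, r_hi in components:
            lo *= r_lo
            hi *= r_hi
        return lo, hi

    press_unit = [(0.90, 0.95), (0.85, 0.92), (0.97, 0.99)]  # hypothetical
    print("system reliability in [%.3f, %.3f]" % series_interval(press_unit))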

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

13083 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts

Authors: S. Karabulut, A. Güllü, A. Güldas, R. Gürbüz

Abstract:

This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data was collected and imported to the artificial neural network model. A multilayer perceptron model was used with the back propagation algorithm employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. Artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. Analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle value from 88° to 45°.
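
A scikit-learn stand-in for the described network (lead angle, cutting speed, and feed rate in; roughness out) is sketched below on synthetic data; the architecture and the data-generating relation are assumptions, not the experimental values.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 200
    lead = rng.uniform(45, 88, n)        # lead angle, deg
    speed = rng.uniform(100, 300, n)     # cutting speed, m/min
    feed = rng.uniform(0.05, 0.25, n)    # feed, mm/rev
    # Invented relation with the paper's qualitative trend (Ra grows with lead).
    Ra = 0.02 * lead + 2.0 * feed - 0.001 * speed + 0.1 * rng.standard_normal(n)

    X = np.column_stack([lead, speed, feed])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    model.fit(X, Ra)
    print("Ra at 45 deg:", model.predict([[45, 200, 0.1]])[0])
    print("Ra at 88 deg:", model.predict([[88, 200, 0.1]])[0])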

Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis.

13082 Cloud Computing Support for Diagnosing Researches

Authors: A. Amirov, O. Gerget, V. Kochegurov

Abstract:

One of the main biomedical problems lies in detecting dependencies in semi-structured data. The solution includes a biomedical portal and algorithms (integral health rating criteria and multidimensional data visualization methods). The biomedical portal allows diagnostic and research data to be processed in parallel using Microsoft System Center 2012 and Windows HPC Server cloud technologies. The service does not expose internal calculations to the user; instead, it provides a practical interface. When data are sent for processing, the user can track the status of the task and obtains the results as soon as the computation is completed. The service includes its own algorithms and supports diagnosing and predicting medical cases. The approved methods are based on complex-system entropy methods, algorithms for determining the energy patterns of development, trajectory models of biological systems, and a logical-probabilistic approach with image blurring.

Keywords: Biomedical portal, cloud computing, diagnostic and prognostic research, mathematical data analysis.

13081 Laban Movement Analysis Using Kinect

Authors: Ran Bernstein, Tal Shafir, Rachelle Tsachor, Karen Studd, Assaf Schuster

Abstract:

Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data could be significantly leveraged if the Laban qualities were recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft's Kinect V2 sensor.

Keywords: Laban Movement Analysis, Kinect, Machine Learning.

13080 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of the acquired data is of great importance before the data processing and interpretation phases are executed. In this study, 2D seismic reflection survey data from the Fort Abbas area, Cholistan Desert, Pakistan, were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the database. The study area is known to be flat, tectonically little affected, and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed a high percentage of residuals between the estimated and predicted cases, the outcomes of the hypothesis tests demonstrated the bias and erraticness of the database, and a low estimated value of alpha (α) in Cronbach's test confirmed its poor reliability. A database of such low quality needs extensive static correction or, in some cases, reacquisition of the data, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and, further, as a guideline for establishing database quality assessment models that support more informed decisions in hydrocarbon exploration.
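
The two main statistics are easy to state in code; the snippet below computes them on toy arrays (the seismic data themselves are not available here).

    import numpy as np

    def nrmse(observed, predicted):
        # RMSE normalized by the observed range.
        rmse = np.sqrt(np.mean((observed - predicted) ** 2))
        return rmse / (observed.max() - observed.min())

    def cronbach_alpha(items):
        # items: (n_cases, k_items); alpha = k/(k-1)*(1 - sum var_i / var_total).
        k = items.shape[1]
        var_items = items.var(axis=0, ddof=1).sum()
        var_total = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - var_items / var_total)

    rng = np.random.default_rng(0)
    obs = rng.normal(0, 1, 100)
    pred = obs + rng.normal(0, 0.5, 100)
    print("NRMSE:", round(float(nrmse(obs, pred)), 3))
    data = rng.normal(0, 1, (50, 5)) + rng.normal(0, 1, (50, 1))  # correlated items
    print("alpha:", round(float(cronbach_alpha(data)), 3))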

Keywords: Data quality, null hypothesis, seismic lines, seismic reflection survey.

13079 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area

Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim

Abstract:

In intelligent transportation systems (ITS), information on link characteristics is an essential factor for planning and operation, but in practice not every link has sensors installed on it. A link without data is called a "missing link". The purpose of this study is to impute the data of these missing links using machine learning: with a deep learning process, missing link data can be estimated from the data of observed links. For the deep learning process, this study uses a recurrent neural network to handle the time-series nature of road data. As input, Dedicated Short-Range Communications (DSRC) data from Dalgubul-daero in the Daegu metropolitan area were fed into the learning process. The network structure takes the present data of 17 links as input, has 2 hidden layers, and outputs the data of 1 missing link. As a result, the forecast for the target link shows about 94% accuracy compared with the actual data.
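
A PyTorch sketch matching the stated structure (17 observed links in, 2 stacked recurrent layers, 1 missing link out) is given below; the hidden size, window length, and synthetic data are assumptions, and the training loop is abbreviated.

    import torch
    import torch.nn as nn

    class MissingLinkRNN(nn.Module):
        def __init__(self, n_links=17, hidden=64):
            super().__init__()
            self.rnn = nn.LSTM(input_size=n_links, hidden_size=hidden,
                               num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):             # x: (batch, time, 17 link speeds)
            out, _ = self.rnn(x)
            return self.head(out[:, -1])  # predict from the last time step

    model = MissingLinkRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 12, 17)           # synthetic: 12 past intervals
    y = torch.randn(32, 1)                # synthetic target link speed
    for _ in range(5):                    # abbreviated training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(float(loss))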

Keywords: Data Estimation, link data, machine learning, road network.

13078 Health Monitoring of Power Transformers by Dissolved Gas Analysis using Regression Method and Study the Effect of Filtration on Oil

Authors: Anjali Chatterjee, Nirmal Kumar Roy

Abstract:

Economically, transformers constitute one of the largest investments in a power system, so transformer condition assessment and management is a high-priority task: if a transformer fails, it has a significant negative impact on revenue and service reliability. Monitoring the state of health of power transformers has traditionally been carried out using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and has been universal practice since around the 1960s. Failures can occur in a transformer for different reasons, and some can be limited or prevented by maintenance; oil filtration is one method of removing the dissolved gases and preventing deterioration of the oil. In this paper, we analyze DGA data by regression and predict future gas concentrations in the oil. We present a comparative study of different traditional regression methods and the errors generated by their predictions. With these data, we can deduce the health of the transformer by identifying the type of fault that has occurred or will occur. Additionally, the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.
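
The prediction step can be illustrated with a plain linear trend per gas; the gas histories below are invented, and the paper compares several regression variants rather than this single one.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical DGA history: days since first sample -> gas ppm.
    days = np.array([[0], [90], [180], [270], [360]])
    gases = {
        "H2":   np.array([12.0, 15.0, 19.0, 24.0, 30.0]),
        "C2H2": np.array([0.5, 0.6, 0.8, 1.1, 1.5]),
    }

    # Fit a trend per gas and extrapolate 180 days ahead; a rising
    # combustible-gas trend is what flags a developing fault.
    for gas, ppm in gases.items():
        fit = LinearRegression().fit(days, ppm)
        print(gas, "now", ppm[-1], "ppm ->", round(fit.predict([[540]])[0], 1),
              "ppm predicted")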

Keywords: Power transformers, dissolved gas analysis, regression method, filtration, oil.

13077 Application of a New Hybrid Optimization Algorithm on Cluster Analysis

Authors: T. Niknam, M. Nayeripour, B. Bahmani Firouzi

Abstract:

Clustering techniques have received attention in many areas, including engineering, medicine, biology, and data mining. The purpose of clustering is to group together data points that are close to one another. The K-means algorithm is one of the most widely used clustering techniques, but it has two shortcomings: dependency on the initial state with convergence to local optima, and the fact that global solutions of large problems cannot be found with a reasonable amount of computational effort. Many clustering studies have sought to overcome the local optima problem. This paper presents an efficient hybrid evolutionary optimization algorithm based on combining Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), called PSO-ACO, for optimally clustering N objects into K clusters. The new PSO-ACO algorithm is tested on several data sets, and its performance is compared with those of ACO, PSO, and K-means clustering. The simulation results show that the proposed evolutionary optimization algorithm is robust and suitable for handling data clustering.
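
For orientation, the PSO half of such a hybrid can be sketched compactly: each particle is a candidate set of K centroids, and fitness is the total distance of points to their nearest centroid. The ACO component and the paper's exact parameter settings are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(centroids, X):
        # Sum of distances from each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return d.min(axis=1).sum()

    def pso_cluster(X, k=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        n = len(X)
        P = X[rng.integers(0, n, (n_particles, k))]  # particle = centroid set
        V = np.zeros_like(P)
        pbest, pbest_f = P.copy(), np.array([fitness(p, X) for p in P])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(P.shape), rng.random(P.shape)
            V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (g - P)
            P = P + V
            f = np.array([fitness(p, X) for p in P])
            better = f < pbest_f
            pbest[better], pbest_f[better] = P[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g

    X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ((0, 0), (3, 3), (0, 3))])
    print(pso_cluster(X))  # k centroid coordinates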

Keywords: Ant Colony Optimization (ACO), Data clustering, Hybrid evolutionary optimization algorithm, K-means clustering, Particle Swarm Optimization (PSO).

13076 Performance Optimization of Data Mining Application Using Radial Basis Function Classifier

Authors: M. Govindarajan, R. M. Chandrasekaran

Abstract:

Text data mining is a process of exploratory data analysis. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before the data are examined. This paper describes a proposed radial basis function classifier that performs comparative cross-validation against the existing radial basis function classifier. The feasibility and benefits of the proposed approach are demonstrated on a data mining problem, direct marketing, which has become an important application field of data mining. Comparative cross-validation involves estimating accuracy by either stratified k-fold cross-validation or equivalent repeated random subsampling. While the proposed method may have high bias, its performance (accuracy estimation in our case) may be poor due to high variance; thus, the accuracy with the proposed radial basis function classifier was lower than with the existing one. However, there is a smaller improvement in runtime and a larger improvement in precision and recall. In the proposed method, classification accuracy and prediction accuracy are determined, and the prediction accuracy is comparatively high.
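
For readers unfamiliar with the two ingredients, the sketch below pairs a simple RBF network (K-means centers, Gaussian features, logistic output) with stratified k-fold cross-validation; it is a generic stand-in on synthetic data, not the authors' classifier or their direct-marketing dataset.

    import numpy as np
    from sklearn.base import BaseEstimator, ClassifierMixin
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import pairwise_distances
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    class RBFNetClassifier(BaseEstimator, ClassifierMixin):
        def __init__(self, n_centers=20, gamma=1.0):
            self.n_centers, self.gamma = n_centers, gamma
        def _phi(self, X):
            d = pairwise_distances(X, self.centers_)
            return np.exp(-self.gamma * d ** 2)   # Gaussian RBF features
        def fit(self, X, y):
            km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0)
            self.centers_ = km.fit(X).cluster_centers_
            self.out_ = LogisticRegression(max_iter=1000).fit(self._phi(X), y)
            self.classes_ = self.out_.classes_
            return self
        def predict(self, X):
            return self.out_.predict(self._phi(X))

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    print("10-fold accuracy:", cross_val_score(RBFNetClassifier(), X, y, cv=cv).mean())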

Keywords: Text Data Mining, Comparative Cross-validation, Radial Basis Function, runtime, accuracy.

13075 Research Trend Analysis – A Sample in the Field of Information Systems

Authors: Hei-Chia Wang, Wei-Pin Chiu

Abstract:

As research performance in academia is treated as one of the indices of national competitiveness, many countries devote considerable attention and resources to increasing their research performance, and understanding research trends is the basic step toward improving it. The goal of this research is to design an analysis system for evaluating research trends by analyzing data from different countries. In this paper, information systems research in Taiwan and other countries, including Asian countries and the prominent countries of the Group of Eight (G8), is used as an example. Our research found that the trends vary across countries. It suggests that Taiwan's scholars could pay more attention to interdisciplinary applications and try to increase their collaboration with other countries, in order to increase Taiwan's competitiveness in the area of information science.

Keywords: Bibliometric analysis, research trend, scientometric analysis.

13074 CNet Module Design of IMCS

Authors: Youkyung Park, SeungYup Kang, SungHo Kim, SimKyun Yook

Abstract:

IMCS is an integrated monitoring and control system for thermal power plants. The system consists of two main parts, controllers and the OIS (Operator Interface System), connected by Ethernet-based communication. On the controller side, communication is managed by the CNet module; on the OIS side, by the OIS data server. The CNet module sends controller data to the data server and receives command data from it. To minimize or balance the load on the data server, the module buffers the data created by the controller in every cycle and sends the buffered data when the data server requests them. For multiple data servers, the module manages a connection line to each server and responds to each server's requests. The CNet module is included in each controller of the redundant system, so when a controller fail-over happens, the module can provide controller data to the data server without loss. This paper presents the three main features of the CNet module that carry out these functions: separation of the get task, use of a ring buffer, and monitoring of communication status.
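
To make the buffering behavior concrete, here is a minimal ring buffer sketch; Python is used purely for illustration (the CNet module itself is a controller-side component), and the class, capacity, and record format are hypothetical.

    from collections import deque

    class CycleBuffer:
        # Fixed-size ring buffer: the controller appends one record per
        # cycle; when full, the oldest record is overwritten, so a slow
        # data server never blocks the control cycle.
        def __init__(self, capacity=1024):
            self._buf = deque(maxlen=capacity)

        def put(self, record):      # called every controller cycle
            self._buf.append(record)

        def drain(self):            # called on a data-server request
            out = list(self._buf)
            self._buf.clear()
            return out

    buf = CycleBuffer(capacity=4)
    for cycle in range(6):          # 6 cycles into capacity 4: oldest 2 drop
        buf.put({"cycle": cycle, "value": cycle * 1.5})
    print(buf.drain())              # records for cycles 2..5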

Keywords: Ethernet communication, DCS, power plant, ring buffer, data integrity
