Search results for: Data Analysis.
13248 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of empirical studies on stock market decision support methodology indicates that they are at the threshold of validating the accuracy of traditional, fuzzy, artificial neural network, and decision tree models. Many researchers have attempted to compare these models using various data sets worldwide; however, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectorial support of stock market decisions. The study identifies the significant variables, and their lags, that affect the price of the stocks using OLS analysis and decision tree classifiers.
Keywords: Indian Automotive Sector, Stock Market Decisions, Equity Portfolio Analysis, Decision Tree Classifiers, Statistical Data Analysis.
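As a rough illustration of the two techniques named in the abstract, the sketch below builds lagged return features from a daily price series, screens the lags with an OLS regression, and trains a decision tree classifier on an up/down label. The price data, column names, lag count, and tree depth are assumptions for illustration, not the study's actual NSE variables.

```python
# Hedged sketch: lagged-OLS significance screening plus a decision tree
# classifier for next-day direction. Data, column names and lags are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def lagged_frame(prices: pd.Series, lags: int = 5) -> pd.DataFrame:
    """Build a frame of lagged returns for one stock."""
    ret = prices.pct_change()
    data = {f"lag_{k}": ret.shift(k) for k in range(1, lags + 1)}
    data["target_ret"] = ret
    return pd.DataFrame(data).dropna()

# Placeholder daily closing prices of one automotive stock.
prices = pd.Series(np.cumsum(np.random.randn(500)) + 100.0)
df = lagged_frame(prices)

# OLS: which lags significantly explain today's return?
X = sm.add_constant(df.drop(columns="target_ret"))
ols = sm.OLS(df["target_ret"], X).fit()
print(ols.summary())                      # inspect p-values of the lags

# Decision tree: classify next-day direction (up = 1, down = 0).
y = (df["target_ret"] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="target_ret"), y,
                                          test_size=0.3, shuffle=False)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("direction accuracy:", tree.score(X_te, y_te))
```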
13247 Content Analysis and Attitude of Thai Students towards Thai Series “Hormones: Season 2”
Authors: Siriporn Meenanan
Abstract:
The objective of this study is to investigate the attitude of Thai students towards the Thai series "Hormones the Series Season 2". The study was conducted as quantitative research, and questionnaires were used to collect data from a sample group of 400 people. Descriptive statistics were used in the data analysis. The findings reveal that most participants have positive comments regarding the series. They strongly agreed that the series reflects the way of life and problems of teenagers in Thailand. Hence, the participants believe that if adults have a chance to watch the series, they will have a better understanding of teenagers. In addition, the participants also agreed that the contents of the series are appropriate and satisfying, as "Hormones the Series Season 2" will raise awareness among teens, who can use it as a guide to prevent problems that might happen during their teenage life.
Keywords: Content analysis, attitude, Thai series, Hormones the series.
13246 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data
Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu
Abstract:
Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the Particle Filter is developed, which considers the abnormal fluctuation scaling known as Taylor's law. The method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A way of determining optimal stock levels that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is clearly observed. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, a profit loss of around 1% halves disposal when the proportionality constant is 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction that keep a high profit, especially with large sales numbers.
Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.
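Taylor's law here refers to fluctuation scaling, where the variance of daily sales grows as a power of the mean. A minimal sketch of checking this on per-item sales counts follows; the sales matrix is placeholder data, and the paper's particle-filter and censored-data estimation are not reproduced.

```python
# Hedged sketch: fit Taylor's law (variance = A * mean**b) to per-item
# daily sales counts. The sales matrix below is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
# 300 items, 90 days of Poisson-like sales counts with varying mean levels.
sales = rng.poisson(lam=rng.uniform(1, 200, size=300)[:, None], size=(300, 90))

mean = sales.mean(axis=1)          # per-item mean daily sales
var = sales.var(axis=1)            # per-item variance of daily sales

# Least-squares fit in log-log space: log(var) = log(A) + b * log(mean).
b, logA = np.polyfit(np.log(mean), np.log(var), deg=1)
print(f"Taylor exponent b = {b:.2f}, prefactor A = {np.exp(logA):.2f}")
```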
13245 An Automation of Check Focusing on CRUD for Requirements Analysis Model in UML
Authors: Shinpei Ogata, Yoshitaka Aoki, Hirotaka Okuda, Saeko Matsuura
Abstract:
A key to the success of high quality software development is to define valid and feasible requirements specifications. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface mock-up from the UML requirements analysis model, so that we can confirm the validity of input/output data for each page and of the page transitions of the system by directly operating the mock-up. This paper proposes a support method to check the validity of a data life cycle by using the model checking tool "UPPAAL", focusing on CRUD (Create, Read, Update and Delete). Exhaustive checking improves the quality of the requirements analysis model, which is validated by the customers through the automatically generated mock-up. The effectiveness of our method is discussed through a case study of requirements modeling in two small projects: a library management system and a supporting textbook sales system for a university.
Keywords: CRUD, Model Checking, Model Driven Development, Requirements Analysis, Unified Modeling Language, UPPAAL.
13244 On Methodologies for Analysing Sickness Absence Data: An Insight into a New Method
Authors: Xiaoshu Lu, Päivi Leino-Arjas, Kustaa Piha, Akseli Aittomäki, Peppiina Saastamoinen, Ossi Rahkonen, Eero Lahelma
Abstract:
Sickness absence represents a major economic and social issue. The analysis of sick leave data is a recurrent challenge to analysts because of the complexity of the data structure, which is often time dependent, highly skewed and clumped at zero. Ignoring these features when making statistical inference is likely to be inefficient and misguided, and traditional approaches do not address these problems. In this study, we discuss model methodologies in terms of statistical techniques for addressing the difficulties with sick leave data. We also introduce and demonstrate a new method by performing a longitudinal assessment of long-term absenteeism, using as a working example a large registration dataset available from the Helsinki Health Study of municipal employees in Finland during the period 1990-1999. We present a comparative study on model selection and a critical analysis of the temporal trends and the occurrence and degree of long-term sickness absences among municipal employees. The strengths of this working example include the large sample size over a long follow-up period, providing strong evidence in support of the new model. Our main goal is to propose a way to select an appropriate model, to introduce a new methodology for analysing sickness absence data, and to demonstrate the model's applicability to complicated longitudinal data.
Keywords: Sickness absence, longitudinal data, methodologies, mix-distribution model.
13243 Measurement of Operational and Environmental Performance of the Coal-Fired Power Plants in India by Using Data Envelopment Analysis
Authors: Vijay Kumar Bajpai, Sudhir Kumar Singh
Abstract:
In this study, performance analyses of twenty five Coal-Fired Power Plants (CFPPs) used for electricity generation are carried out through various Data Envelopment Analysis (DEA) models. Three efficiency indices are defined and pursued. During the calculation of the operational performance, energy and non-energy variables are used as inputs, and net electricity produced is used as the desired output (Model-1). CO2 emitted to the environment is used as the undesired output (Model-2) in the computation of the pure environmental performance, while in Model-3 CO2 emissions are considered as a detrimental input in the calculation of operational and environmental performance. Empirical results show that most of the plants are operating in the increasing returns to scale region, and the Mettur plant is the efficient one with regard to energy use and the environment. The results also indicate that the undesirable output effect is insignificant in the research sample. The present study will provide clues to plant operators towards raising the operational and environmental performance of CFPPs.
Keywords: Coal fired power plants, environmental performance, data envelopment analysis, operational performance.
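For readers unfamiliar with DEA, a minimal input-oriented CCR envelopment model can be solved as one linear program per plant. The sketch below uses placeholder input/output matrices rather than the paper's actual plant data, and is only one of several DEA formulations.

```python
# Hedged sketch: input-oriented CCR efficiency for each DMU (plant) via
# linear programming. Inputs/outputs are placeholder arrays, not the
# paper's plant data. Decision variables are [theta, lambda_1, ..., lambda_n].
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: (m inputs, n DMUs), Y: (s outputs, n DMUs). Returns n scores."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        # inputs:  X @ lam - theta * x_o <= 0
        A_in = np.hstack([-X[:, [o]], X])
        # outputs: -Y @ lam <= -y_o  (i.e. Y @ lam >= y_o)
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        bounds = [(None, None)] + [(0, None)] * n      # theta free, lambdas >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[120, 90, 150, 60], [30, 25, 45, 20]], float)   # e.g. coal, labour
Y = np.array([[400, 350, 420, 260]], float)                   # e.g. net MWh
print(np.round(ccr_efficiency(X, Y), 3))
```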
13242 Road Safety in Great Britain: An Exploratory Data Analysis
Authors: Jatin Kumar Choudhary, Naren Rayala, Abbas Eslami Kiasari, Fahimeh Jafari
Abstract:
Great Britain has one of the safest road networks in the world. However, the consequences of any death or serious injury are devastating for loved ones, as well as for those who help the severely injured. This paper aims to analyse Great Britain's road safety situation and show the response measures for areas where the total damage caused by accidents can be significantly and quickly reduced. Although the UK has had a good record in reducing fatalities over the past 30 years, there is still a considerable number of road deaths. The government continues to work to reduce road deaths by empowering responsible road users and by identifying and acting on the factors that make the roads less safe. This study presents an exploratory analysis with deep insights which could provide policy makers with invaluable information about how accidents happen and how they can be mitigated. We use the STATS19 data published by the UK government. Since we need more information about locations than is provided in STATS19, we first expand the features of the dataset using OpenStreetMap and Visual Crossing. This paper also provides a discussion of new road safety methods.
Keywords: Road safety, data analysis, OpenStreetMap, feature expanding.
13241 Automatic Real-Patient Medical Data De-Identification for Research Purposes
Authors: Petr Vcelak, Jana Kleckova
Abstract:
Our medicine-oriented research is based on a medical data set of real patients. It is a security problem to share patients' private data with people other than clinicians or hospital staff, so we have to remove personal identification information from the medical data. After a de-identification process, the medical data without private data are available for any research purpose. In this paper, we introduce a universal automatic rule-based de-identification application that does all of this on heterogeneous medical data. A patient's private identification is replaced by a unique identification number, even in burned-in annotations in pixel data. The same identification is used for all of a patient's medical data, so relationships within the data are preserved. The hospital can take advantage of research feedback based on the results.
Keywords: DASTA, De-identification, DICOM, Health Level Seven, Medical data, OCR, Personal data.
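The core idea of consistently replacing a patient identifier with the same pseudonymous number across all of that patient's records can be sketched as follows. The rule set, DICOM/DASTA handling and OCR of burned-in pixel annotations in the actual application are far richer than this illustration, and the salt and field names below are assumptions.

```python
# Hedged sketch: deterministic pseudonymization so the same patient ID
# always maps to the same replacement across records. Field names and the
# secret salt are illustrative; real DICOM/OCR handling is not shown.
import hashlib

SECRET_SALT = b"project-specific-secret"      # assumption: kept by the hospital

def pseudonym(patient_id: str, digits: int = 10) -> str:
    """Map a real identifier to a stable, non-reversible numeric code."""
    digest = hashlib.sha256(SECRET_SALT + patient_id.encode("utf-8")).hexdigest()
    return str(int(digest, 16))[:digits]

records = [
    {"patient_id": "CZ-1975-0042", "modality": "CT", "finding": "..."},
    {"patient_id": "CZ-1975-0042", "modality": "MR", "finding": "..."},
    {"patient_id": "CZ-1980-0107", "modality": "CT", "finding": "..."},
]

for rec in records:
    rec["patient_id"] = pseudonym(rec["patient_id"])   # relationships preserved

print({r["patient_id"] for r in records})              # two distinct pseudonyms
```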
13240 A Forecast Model for Projecting the Amount of Hazardous Waste
Authors: J. Vilgerts, L. Timma, D. Blumberga
Abstract:
The objective of the paper is to develop a forecast model for hazardous waste (HW) flows. The methodology of the research included six modules: historical data, assumptions, choice of indicators, data processing, data analysis with STATGRAPHICS, and forecast models. The proposed methodology was validated in a case study for Latvia. Hypotheses on the changes in HW for the time period 2010-2020 were developed and mathematically described at confidence levels of 95.0% and 50.0%. Sensitivity analysis for the analyzed scenarios was done. The results show that the growth of GDP affects the total amount of HW in the country. The total amount of HW is projected to lie within a corridor from -27.7% in the optimistic scenario up to +87.8% in the pessimistic scenario at a confidence level of 50.0% for the period 2010-2020. The optimistic scenario was shown to be the least sensitive to changes in GDP growth.
Keywords: Forecast models, hazardous waste management, sustainable development, waste management indicators.
13239 Performance Analysis of the Subgroup Method for Collective I/O
Authors: Kwangho Cha, Hyeyoung Cho, Sungho Kim
Abstract:
As many scientific applications require large-scale data processing, the importance of parallel I/O has been increasingly recognized. Collective I/O is one of the notable features of parallel I/O and enables application programmers to easily handle large data volumes. In this paper we measured and analyzed the performance of original collective I/O and of the subgroup method, a way of using MPI collective I/O effectively. From the experimental results, we found that the subgroup method showed good performance for small data sizes.
Keywords: Collective I/O, MPI, parallel file system.
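A minimal mpi4py illustration of the subgroup idea follows: the world communicator is split and a collective write is performed within each subgroup rather than over all ranks. The subgroup size, file name, and data layout are assumptions for illustration, not the paper's benchmark configuration.

```python
# Hedged sketch (mpi4py): split ranks into subgroups and do a collective
# write inside each subgroup instead of over the whole communicator.
# Subgroup size, file name and buffer size are illustrative only.
# Run with e.g.:  mpiexec -n 8 python subgroup_io.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

GROUP_SIZE = 4                               # assumption: 4 ranks per subgroup
color = rank // GROUP_SIZE
subcomm = comm.Split(color=color, key=rank)  # one communicator per subgroup

data = np.full(1024, rank, dtype=np.int32)   # each rank's contribution

fh = MPI.File.Open(subcomm, f"out_group{color}.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = subcomm.Get_rank() * data.nbytes
fh.Write_at_all(offset, data)                # collective I/O within the subgroup
fh.Close()
subcomm.Free()
```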
13238 Analysis and Classification of HIV-1 Subtype Viruses by AR Model through Artificial Neural Networks
Authors: O. Yavuz, L. Ozyilmaz
Abstract:
The HIV-1 genome is highly heterogeneous, and because of this variation its features span a wide range. As a result, the infectivity of the virus depends on the chemokine receptors it can use: R5 HIV viruses use the CCR5 coreceptor, X4 viruses use CXCR4, and R5X4 viruses can utilize both coreceptors. Recently, in bioinformatics, the classification of R5X4 viruses has been studied using experiments on the HIV-1 genome. In this study, R5X4-type HIV viruses were classified using an Auto Regressive (AR) model through Artificial Neural Networks (ANNs). The statistical data of R5X4, R5 and X4 viruses were analyzed using signal processing methods and ANNs. Accessible residues of these virus sequences were obtained and modeled by an AR model, since the number of residues is large and differs from sequence to sequence. Finally, the pre-processed data were used to evolve various ANN structures for determining R5X4 viruses. Furthermore, ROC analysis was applied to the ANNs to show their real performance. The results indicate that R5X4 viruses were successfully classified, with high sensitivity and specificity values in training and testing ROC analysis for the RBF network, which gives the best performance among the ANN structures.
Keywords: Auto-Regressive Model, HIV, Neural Networks, ROC Analysis.
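The shape of such a pipeline can be sketched roughly: fit a fixed-order AR model to each numerically encoded sequence so that variable-length sequences become fixed-length coefficient vectors, then train a neural network on those vectors. The encoding, AR order, network type (an MLP stands in for the RBF networks used in the study) and labels below are placeholders, not the study's data.

```python
# Hedged sketch: AR coefficients as fixed-length features for a neural network
# classifier. Sequences, AR order, labels and the MLP are synthetic stand-ins.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.neural_network import MLPClassifier

def ar_features(signal: np.ndarray, order: int = 8) -> np.ndarray:
    """Return the AR(order) coefficients of a 1-D numeric sequence."""
    model = AutoReg(signal, lags=order).fit()
    return model.params[1:]            # drop the intercept

rng = np.random.default_rng(1)
# Placeholder "sequences" of varying length (e.g. encoded accessible residues).
sequences = [rng.standard_normal(rng.integers(120, 200)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)   # placeholder: R5X4 vs. non-R5X4

X = np.vstack([ar_features(s) for s in sequences])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```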
13237 Self Organizing Mixture Network in Mixture Discriminant Analysis: An Experimental Study
Authors: Nazif Çalış, Murat Erişoğlu, Hamza Erol, Tayfun Servi
Abstract:
In recent works related to mixture discriminant analysis (MDA), the expectation-maximization (EM) algorithm is used to estimate the parameters of Gaussian mixtures. However, the initial values of the EM algorithm affect the final parameter estimates. Also, when the EM algorithm is applied twice to the same data set, it can give different parameter estimates, and this affects the classification accuracy of MDA. To overcome this problem, we use the Self Organizing Mixture Network (SOMN) algorithm to estimate the parameters of Gaussian mixtures in MDA, since SOMN is more robust when random initial values of the parameters are used [5]. We show the effectiveness of this method on the popular simulated waveform datasets and the real glass data set.
Keywords: Self Organizing Mixture Network, Mixture Discriminant Analysis, Waveform Datasets, Glass Identification, Mixture of Multivariate Normal Distributions.
13236 Application of Mutual Information based Least dependent Component Analysis (MILCA) for Removal of Ocular Artifacts from Electroencephalogram
Authors: V Krishnaveni, S Jayaraman, K Ramadoss
Abstract:
The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in electroencephalogram (EEG) recordings and can propagate widely across the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been used widely for removing artifacts from observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from EEG. The aim of MILCA is to minimize the Mutual Information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with eleven popular algorithms (Infomax, Extended Infomax, Fast ICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) in terms of the actual independence and uniqueness of the estimated source components obtained for different sets of EEG data with ocular artifacts, using a reliable MI estimator. Results show that MILCA is best at separating the ocular artifacts from the EEG and is recommended for further analysis.
Keywords: Electroencephalogram, Ocular Artifacts (OA), Independent Component Analysis (ICA), Mutual Information (MI), Mutual Information based Least dependent Component Analysis (MILCA).
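MILCA itself is not available in common Python libraries; as a stand-in illustrating the general ICA-based clean-up workflow that the paper compares, the sketch below uses scikit-learn's FastICA (one of the eleven reference algorithms) to decompose synthetic multichannel EEG, zero out a component assumed to be ocular, and reconstruct the signals. The channel count, sampling rate, and artifact-selection rule are assumptions.

```python
# Hedged sketch: generic ICA-based ocular-artifact removal with FastICA as a
# stand-in for MILCA (which is not in scikit-learn). Data are synthetic and
# the rule for picking the ocular component is purely illustrative.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 5000

brain = rng.standard_normal((n_channels, n_samples)) * 5.0
blink = np.zeros(n_samples)
blink[::500] = 80.0                                  # sparse, large eye blinks
eeg = brain + np.outer(np.linspace(1.0, 0.1, n_channels), blink)

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg.T)                  # shape: (samples, components)

# Crude heuristic: the ocular component shows the most extreme peaks.
ocular = np.argmax(np.abs(sources).max(axis=0))
sources[:, ocular] = 0.0                            # suppress that component

cleaned = ica.inverse_transform(sources).T          # back to channels x samples
print("peak amplitude before/after:", np.abs(eeg).max(), np.abs(cleaned).max())
```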
13235 Novelist Calls Out Poemist: A Psycholinguistic and Contrastive Analysis of the Errors in Turkish EFL Learners' Interlanguage
Authors: Mehmet Ozcan
Abstract:
This study is designed to investigate, from an explanatory and thus qualitative perspective, the errors that emerged in written texts produced by 30 Turkish EFL learners. Erroneous language elements were identified by the researcher first, and then their grammaticality and intelligibility were checked by five native speakers of English. The analysis of the data showed that it is difficult to claim that an error stems from only one single factor, since different features of an error are triggered by different factors. Our findings revealed two different types of errors: those which stem from the interference of L1 with L2, and those which are developmental. The former type contains more global errors, whereas the errors in the latter type are more intelligible.
Keywords: Contrastive analysis, Error analysis, Language acquisition, Language transfer, Turkish
13234 Blood Glucose Measurement and Analysis: Methodology
Authors: I. M. Abd Rahim, H. Abdul Rahim, R. Ghazali
Abstract:
Numerous non-invasive blood glucose measurement techniques have been developed by researchers, and near infrared (NIR) is currently a promising technique. However, there is some disagreement on the optimal wavelength range that is suitable to be used as the reference for the glucose substance in the blood. This paper focuses on the experimental data collection technique and also on the analysis method used to analyze the data gained from the experiment. The selection of a suitable linear or non-linear model structure is essential in a prediction system, as the developed system needs to be acceptably accurate.
Keywords: Invasive, linear, near-infrared (NIR), non-invasive, non-linear, prediction system.
13233 Approximate Frequent Pattern Discovery Over Data Stream
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop
Abstract:
Frequent pattern discovery over a data stream is a hard problem because the continuously generated nature of a stream does not allow revisiting each data element. Furthermore, the pattern discovery process must be fast to produce timely results. Based on these requirements, we propose an approximate approach to tackle the problem of discovering frequent patterns over a continuous stream. Our approximation algorithm is intended to be applied to process a stream prior to the pattern discovery process. The results of approximate frequent pattern discovery are reported in the paper.
Keywords: Frequent pattern discovery, Approximate algorithm, Data stream analysis.
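The abstract does not spell out the authors' algorithm, so as a generic illustration of one-pass, bounded-memory approximate frequent-item discovery over a stream, here is the classic lossy counting algorithm (frequent single items rather than full patterns, as a simplification):

```python
# Hedged sketch: lossy counting, a classic approximate algorithm for frequent
# items over a stream. It illustrates the general idea of one-pass, bounded-
# memory discovery; it is not the specific algorithm proposed in the paper.
def lossy_counting(stream, support=0.05, error=0.005):
    bucket_width = int(1 / error)
    counts, deltas = {}, {}
    n = 0
    for item in stream:
        n += 1
        bucket = (n - 1) // bucket_width + 1
        if item in counts:
            counts[item] += 1
        else:
            counts[item], deltas[item] = 1, bucket - 1
        if n % bucket_width == 0:                 # periodic pruning step
            for key in list(counts):
                if counts[key] + deltas[key] <= bucket:
                    del counts[key], deltas[key]
    threshold = (support - error) * n
    return {k: v for k, v in counts.items() if v >= threshold}

stream = ["milk", "bread", "milk", "beer", "milk", "bread"] * 1000 + ["egg"] * 30
print(lossy_counting(iter(stream)))               # frequent items, approximately
```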
13232 Analyzing Methods of the Relation between Concepts based on a Concept Hierarchy
Authors: Ke Lu, Tetsuya Furukawa
Abstract:
Data objects are usually organized hierarchically, and the relations between them are analyzed based on a corresponding concept hierarchy. The relation between data objects, for example how similar they are, is usually analyzed based on the conceptual distance in the hierarchy. If a node is an ancestor of another node, it is enough to analyze how close they are by calculating the distance vertically. However, if there is no such relation between two nodes, the vertical distance cannot express their relation explicitly. This paper tries to fill this gap by improving the analysis method for data objects based on a hierarchy. The contributions of this paper include: (1) proposing an improved method to evaluate the vertical distance between concepts; (2) defining the concept of horizontal distance and a method to calculate it; and (3) discussing methods to confine a range by the horizontal distance and the vertical distance, and evaluating the relation between concepts.
Keywords: Concept Hierarchy, Horizontal Distance, Relation Analysis, Vertical Distance.
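To make the vertical/horizontal distinction concrete, here is a small sketch over a toy concept hierarchy: vertical distance counts edges along an ancestor path, and one simple horizontal measure compares how far two concepts sit from their lowest common ancestor. The hierarchy and the horizontal formula are illustrative choices, not the paper's exact definitions.

```python
# Hedged sketch: vertical and a simple horizontal distance on a toy concept
# hierarchy stored as a child -> parent map. The horizontal measure here is
# one illustrative choice, not necessarily the paper's exact formula.
PARENT = {
    "sedan": "car", "suv": "car", "car": "vehicle",
    "bicycle": "vehicle", "vehicle": "thing",
}

def ancestors(node):
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path                                   # node, parent, ..., root

def vertical_distance(a, b):
    """Edges between a and b when one is an ancestor of the other, else None."""
    pa, pb = ancestors(a), ancestors(b)
    if b in pa:
        return pa.index(b)
    if a in pb:
        return pb.index(a)
    return None

def horizontal_distance(a, b):
    """Sum of each node's distance to the lowest common ancestor (LCA)."""
    pa, pb = ancestors(a), ancestors(b)
    lca = next(x for x in pa if x in pb)
    return pa.index(lca) + pb.index(lca)

print(vertical_distance("sedan", "vehicle"))      # 2 (sedan -> car -> vehicle)
print(horizontal_distance("sedan", "bicycle"))    # 2 + 1 = 3 via LCA "vehicle"
```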
13231 Computer Software Applicable in Rehabilitation, Cardiology and Molecular Biology
Authors: P. Kowalska, P. Gabka, K. Kamieniarz, M. Kamieniarz, W. Stryla, P. Guzik, T. Krauze
Abstract:
We have developed a computer program consisting of 6 subtests assessing children's hand dexterity, applicable in rehabilitation medicine. We carried out a normative study on a representative sample of 285 children aged from 7 to 15 (mean age 11.3) and propose clinical standards for three age groups (7-9, 9-11, 12-15 years). We have shown statistical significance of the differences among the corresponding mean values of task completion time. We have also found a strong correlation between task completion time and the age of the subjects, and we have performed test-retest reliability checks in a sample of 84 children, giving high values of the Pearson coefficients for the dominant and non-dominant hand in the range 0.74.
Keywords: Biomedical data base processing, Computer software, Hand dexterity, Heart rate and blood pressure variability.
13230 Automatic Camera Calibration for Images of Soccer Match
Abstract:
Camera calibration plays an important role in the analysis of sports video. Considering soccer video, in most cases the cross-points at the center of the soccer field that can be used for calibration are not sufficient, so this paper introduces a new automatic camera calibration algorithm focused on solving this problem by using the properties of images of the center circle, the halfway line and a touch line. After a theoretical analysis, a practicable automatic algorithm is proposed. Although very little information is used, results of experiments with both synthetic data and real data show that the algorithm is applicable.
Keywords: Absolute conic, camera calibration, circular points, line at infinity.
13229 Utilizing the Principal Component Analysis on Multispectral Aerial Imagery for Identification of Underlying Structures
Authors: M. Bosques-Perez, W. Izquierdo, H. Martin, L. Deng, J. Rodriguez, T. Yan, M. Cabrerizo, A. Barreto, N. Rishe, M. Adjouadi
Abstract:
Aerial imagery is a powerful tool when it comes to analyzing temporal changes in ecosystems and extracting valuable information from the observed scene. It allows us to identify and assess various elements such as objects, structures, textures, waterways, and shadows. To extract meaningful information, multispectral cameras capture data across different wavelength bands of the electromagnetic spectrum. In this study, the collected multispectral aerial images were subjected to principal component analysis (PCA) to identify independent and uncorrelated components or features that extend beyond the visible spectrum captured in standard RGB images. The results demonstrate that these principal components contain unique characteristics specific to certain wavebands, enabling effective object identification and image segmentation.
Keywords: Big data, image processing, multispectral, principal component analysis.
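A minimal sketch of the PCA step on a multispectral image cube (bands x height x width), flattening pixels into rows with one column per band, follows. The band count and array are placeholders, not the study's imagery.

```python
# Hedged sketch: PCA over multispectral bands. Each pixel is a sample, each
# band a feature; the image cube here is synthetic, not the study's data.
import numpy as np
from sklearn.decomposition import PCA

bands, height, width = 6, 128, 128               # assumed 6 spectral bands
cube = np.random.rand(bands, height, width)      # placeholder image cube

pixels = cube.reshape(bands, -1).T               # shape: (n_pixels, n_bands)
pca = PCA(n_components=3)
components = pca.fit_transform(pixels)           # uncorrelated features per pixel

pc_images = components.T.reshape(3, height, width)   # back to image layout
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```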
13228 Numerical Analysis of the SIR-SI Differential Equations with Application to Dengue Disease Mapping in Kuala Lumpur, Malaysia
Authors: N. A. Samat, D. F. Percy
Abstract:
The main aim of this study is to describe and introduce a method of numerical analysis for obtaining approximate solutions of the SIR-SI differential equations (susceptible-infective-recovered for human populations; susceptible-infective for vector populations), which represent a model for dengue disease transmission. Firstly, we describe the ordinary differential equations for the SIR-SI disease transmission model. Then, we introduce the numerical analysis of solutions of this continuous time, discrete space SIR-SI model by simplifying the continuous time scale to a densely populated, discrete time scale. This is followed by the application of this numerical analysis of solutions of the SIR-SI differential equations to the estimation of relative risk using continuous time, discrete space dengue data for Kuala Lumpur, Malaysia. Finally, we present the results of the analysis, comparing and displaying the results in graphs, a table and maps. The results of the numerical analysis of solutions that we implemented offer a useful and potentially superior model for estimating relative risks based on continuous time, discrete space data for vector-borne infectious diseases, specifically for dengue disease.
Keywords: Dengue disease, disease mapping, numerical analysis, SIR-SI differential equations.
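For reference, a basic continuous-time SIR-SI system of the kind described above can be integrated numerically as follows; the compartment structure is the standard host-vector form and the parameter values are illustrative, not the fitted Kuala Lumpur estimates.

```python
# Hedged sketch: numerical solution of a basic SIR-SI dengue model
# (S, I, R for humans; S, I for vectors). Parameter values are illustrative
# placeholders, not the fitted values from the Kuala Lumpur study.
import numpy as np
from scipy.integrate import solve_ivp

N_h, N_v = 10_000, 50_000          # host and vector population sizes (assumed)
beta_hv, beta_vh = 0.4, 0.3        # transmission rates vector->human, human->vector
gamma, mu_v = 0.14, 0.07           # human recovery rate, vector mortality rate

def sir_si(t, y):
    S_h, I_h, R_h, S_v, I_v = y
    new_h = beta_hv * S_h * I_v / N_h          # human infections from vectors
    new_v = beta_vh * S_v * I_h / N_h          # vector infections from humans
    return [-new_h,
            new_h - gamma * I_h,
            gamma * I_h,
            mu_v * N_v - new_v - mu_v * S_v,   # vector births balance deaths
            new_v - mu_v * I_v]

y0 = [N_h - 10, 10, 0, N_v - 100, 100]
sol = solve_ivp(sir_si, (0, 365), y0, dense_output=True)
print("peak human infectives ~", int(sol.y[1].max()))
```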
13227 Risk Factors' Analysis on Shanghai Carbon Trading
Authors: Zhaojun Wang, Zongdi Sun, Zhiyuan Liu
Abstract:
First, the carbon trading price and trading volume in Shanghai are transformed by the Fourier transform, and the frequency response diagram is obtained. Then, the frequency response diagram is analyzed and a Blackman filter is designed. The Blackman filter is applied, and the carbon trading time domain and frequency response diagrams are obtained. After wavelet analysis, the carbon trading data were processed to obtain average values over 5-, 10-, 20-, 30-, and 60-day windows. Finally, the data are used as input to a Back Propagation Neural Network model for prediction.
Keywords: Shanghai carbon trading, carbon trading price, carbon trading volume, wavelet analysis, BP neural network model.
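A small sketch of the first two steps (frequency inspection via FFT and Blackman-window smoothing) on a placeholder price series is shown below; the wavelet decomposition and BP network stages of the paper are not reproduced, and the filter design here is a simple smoothing stand-in.

```python
# Hedged sketch: FFT-based frequency inspection and Blackman-window smoothing
# of a price series. The series is synthetic; wavelet analysis and the BP
# neural network stages of the paper are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
price = 40 + np.cumsum(rng.standard_normal(512)) * 0.3   # placeholder daily prices

# Frequency response of the (de-meaned) series.
spectrum = np.abs(np.fft.rfft(price - price.mean()))
freqs = np.fft.rfftfreq(price.size, d=1.0)               # cycles per day
print("dominant period ~", 1 / freqs[1:][spectrum[1:].argmax()], "days")

# Blackman-window low-pass smoothing (moving average with Blackman weights).
win = np.blackman(21)
win /= win.sum()
smoothed = np.convolve(price, win, mode="same")

# Multi-horizon averages analogous to the 5/10/20/30/60-day values in the text.
for n in (5, 10, 20, 30, 60):
    print(n, "day mean of last window:", price[-n:].mean().round(2))
```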
13226 Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: A Data Mining Approach
Authors: Hamid R. S. Mojaveri, Seyed S. Mousavi, Mojtaba Heydar, Ahmad Aminian
Abstract:
The aim of this paper is to present a three-step methodology to forecast supply chain demand. In the first step, various data mining techniques are applied in order to prepare the data for entering the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented after defining the Mean Absolute Percentage Error index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast from the classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained on the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network can forecast more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using raw data, and the effectiveness of clustering analysis is also measured.
Keywords: Artificial Neural Networks (ANN), bullwhip effect, demand forecasting, Support Vector Machine (SVM).
13225 Data Envelopment Analysis with Partially Perfect Objects
Authors: Alexander Y. Vaninsky
Abstract:
This paper presents a simplified version of Data Envelopment Analysis (DEA), a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object, the one having the greatest outputs and smallest inputs. It allows an explicit analytical solution to be obtained and is a step towards an absolute measure of efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs, and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are combined to get a weighted efficiency score. The computational scheme remains as simple as that of DEA PO, but the advantage of DEA PPO is that it takes into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.
Keywords: Data Envelopment Analysis, Perfect object, Partially perfect object, Partial efficiency, Explicit solution, Simplified algorithm.
13224 Nonlinear Multivariable Analysis of CO2 Emissions in China
Authors: Hsiao-Tien Pao, Yi-Ying Li, Hsin-Chia Fu
Abstract:
This paper addresses the impacts of energy consumption, economic growth, financial development, and population size on environmental degradation using grey relational analysis (GRA) for China, where foreign direct investment (FDI) inflows are the proxy variable for financial development. The more recent historical data for the period 2004–2011 are used, because the use of very old data may not be suitable for rapidly developing countries. The results of the GRA indicate that the linkage effects of energy consumption–emissions and GDP–emissions are ranked first and second, respectively. This reveals that energy consumption and economic growth are strongly correlated with emissions: higher economic growth requires more energy consumption and increases environmental pollution, and likewise, more efficient energy use needs a higher level of economic development. Therefore, policies to improve energy efficiency and create a low-carbon economy can reduce emissions without hurting economic growth. The FDI–emissions linkage is ranked third, which indicates that China does not apply weak environmental regulations to attract inward FDI; furthermore, China's government should strengthen environmental policy when attracting inward FDI. The population–emissions linkage effect is ranked fourth, implying that population size does not directly affect CO2 emissions, even though China has the world's largest population and Chinese people are very economical in their use of energy-related products. Overall, energy conservation, efficiency improvement, demand management, and financial development, which aim at curtailing the waste of energy and reducing both energy consumption and emissions without loss of the country's competitiveness, can be adopted by developing economies. GRA is one of the best ways to build a dynamic analysis model from limited data.
Keywords: Grey relational analysis, foreign direct investment, CO2 emissions, China.
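For readers unfamiliar with GRA, a compact sketch of computing grey relational grades between a reference series (emissions) and several comparison series follows; the numbers are placeholders, not the 2004-2011 Chinese data, and the distinguishing coefficient is the conventional 0.5.

```python
# Hedged sketch: grey relational analysis (GRA). The reference and comparison
# series are placeholder numbers, not the 2004-2011 Chinese data.
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """reference: (T,), comparisons: (k, T). Returns one grade per series."""
    series = np.vstack([reference, comparisons]).astype(float)
    # Normalize each series to [0, 1] so scales are comparable.
    mins = series.min(axis=1, keepdims=True)
    maxs = series.max(axis=1, keepdims=True)
    norm = (series - mins) / (maxs - mins)
    ref, comps = norm[0], norm[1:]
    delta = np.abs(comps - ref)                          # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)   # relational coefficients
    return xi.mean(axis=1)                               # grade = mean coefficient

emissions = np.array([5.2, 5.9, 6.4, 6.9, 7.1, 7.7, 8.3, 8.9])   # placeholder
factors = np.array([
    [18, 20, 23, 25, 27, 29, 32, 35],     # energy consumption (placeholder)
    [16, 18, 21, 25, 28, 31, 35, 40],     # GDP (placeholder)
    [4, 5, 6, 7, 7, 8, 9, 9],             # FDI inflows (placeholder)
])
print(np.round(grey_relational_grades(emissions, factors), 3))
```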
13223 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology
Authors: Roland S. Nattermann, Reiner Anderl
Abstract:
Existing process models for the development of mechatronic systems call for largely parallel work in the detailed development phase, carried out largely independently within the various disciplines involved. A new process model further develops the existing models for use in the development of adaptronic systems. This approach is based on an intermediate integration and an abstract modeling of the adaptronic system. Based on this system model, a simulation of the global system behavior under external and internal factors or forces is developed. For the intermediate integration a special data management system is used. In the presented approach this data management system has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. This concept divides the functions into six layers. In the first layer a system model is created, which divides the adaptronic system according to its components and the various technical disciplines. Moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties result in a network which is analyzed in the second layer. From this analysis, necessary adjustments to individual components for specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior. This simulation is a precursor for the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and necessary adjustments to other components are calculated. The other layers of the concept treat the automatic calculation of system reliability, the "normal" PDM functionality and the integration of discipline-specific data into the system model. A prototype of such a data management system, extended with automatic system development, is being implemented using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.
Keywords: Adaptronic, Data Management, LOEWE-Centre AdRIA.
13222 Multivariate School Travel Demand Regression Based on Trip Attraction
Authors: Ben-Edigbe J., Rahman R.
Abstract:
Since primary school trips usually start from home, many scholars have focused on the home end for data gathering; thereafter, category analysis has often been relied upon when predicting school travel demand. In this paper, the school end was relied on for data gathering and multivariate regression for future travel demand prediction. 9859 pupils were surveyed by way of questionnaires at 21 primary schools. The study was carried out in Skudai Town, Malaysia, which was divided into 5 zones. Based on the hypothesis that the numbers of primary school trip ends are expected to be the same because school trips are fixed, the choice of trip end should have an inconsequential effect on the outcome. The study compared empirical data for home and school trip end productions and attractions. The variance between the two data sets was insignificant, although some claims from the home-based family survey were found to be grossly exaggerated. Data from the school trip ends were relied on for travel demand prediction because of their completeness. Accessibility, trip attraction and trip production were then related to school trip rates under daylight and dry weather conditions. The paper concludes that accessibility is an important parameter when predicting demand for future school trip rates.
Keywords: Trip generation, regression analysis, multiple linear regressions.
13221 Empirical Process Monitoring Via Chemometric Analysis of Partially Unbalanced Data
Authors: Hyun-Woo Cho
Abstract:
Real-time or in-line process monitoring frameworks are designed to give early warnings of a fault along with meaningful identification of its assignable causes. In the pattern recognition areas of artificial intelligence and machine learning, various promising approaches have been proposed, such as kernel-based nonlinear machine learning techniques. This work presents a kernel-based empirical monitoring scheme for batch-type production processes with the small-sample-size problem of partially unbalanced data: measurement data from normal operations are easy to collect, whilst data on special events or faults are difficult to collect. In such situations, noise filtering techniques can be helpful in enhancing process monitoring performance. Furthermore, preprocessing of the raw process data is used to remove unwanted variation in the data. The performance of the monitoring scheme was demonstrated using three-dimensional batch data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate for process faults.
Keywords: Process Monitoring, kernel methods, multivariate filtering, data-driven techniques, quality improvement.
13220 A Comparative Analysis of Different Web Content Mining Tools
Authors: T. Suresh Kumar, M. Arthanari, N. Shanthi
Abstract:
Nowadays, the Web has become one of the most pervasive platforms for information exchange and retrieval. It provides the suitable, well-fitting information that one requires from websites. Data mining is a means of extracting the data available on the internet. Web mining is one of the elements of data mining techniques and relates to various research communities such as information retrieval, database management systems and artificial intelligence. In this paper we discuss the concepts of Web mining. We have generally focused on one of the categories of Web mining, specifically Web Content Mining and its various tasks. The mining tools are essential for scanning the many images, text, and HTML documents, and the results are then used by the various search engines. We conclude by presenting a comparative table of these tools based on some pertinent criteria.
Keywords: Data Mining, Web Mining, Web Content Mining, Mining Tools, Information retrieval.
13219 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques
Authors: C. Ardil
Abstract:
This paper presents an original application of multiple criteria decision making analysis theory to the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in the aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of opposing criteria and possible solutions, such a selection can be considered as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making, considering the multiple criteria utility theory and maximal regret minimization theory methods as well as aircraft technical, economic, and environmental aspects. The multiple criteria decision making analysis method uses different normalization techniques to allow criteria with qualitative and quantitative data to be aggregated in the decision problem. Therefore, selecting a suitable normalization technique for the model is also a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were used to evaluate the aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process by enabling the decision maker to: (i) use higher-level knowledge regarding the selection of criteria weights and the proposed technique, and (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft was considered in order to illustrate the proposed approach. The results obtained with the proposed approach were compared using Spearman's rho tests. An analysis of the stability of the final ranks with respect to changes in criteria weights was also performed so as to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA
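The four normalization techniques named in the abstract can be written compactly as follows; benefit-type criteria are assumed throughout (cost criteria would need the usual inverted handling), and the decision matrix is a placeholder, not the paper's aircraft data.

```python
# Hedged sketch: the four data normalization techniques named in the abstract,
# applied column-wise to a decision matrix of benefit-type criteria.
# (Cost-type criteria would need inverted handling, omitted here.)
import numpy as np

def vector_norm(X):            # x_ij / sqrt(sum_i x_ij^2)
    return X / np.sqrt((X ** 2).sum(axis=0))

def linear_sum_norm(X):        # x_ij / sum_i x_ij
    return X / X.sum(axis=0)

def linear_max_norm(X):        # x_ij / max_i x_ij
    return X / X.max(axis=0)

def linear_max_min_norm(X):    # (x_ij - min_i) / (max_i - min_i)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Rows: candidate aircraft (placeholders); columns: criteria such as range,
# seat capacity, fuel efficiency (illustrative values only).
X = np.array([[5600.0, 180, 2.9],
              [6100.0, 210, 2.6],
              [5200.0, 162, 3.1]])

for f in (vector_norm, linear_sum_norm, linear_max_norm, linear_max_min_norm):
    print(f.__name__, "\n", np.round(f(X), 3))
```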