Search results for: mathematical data analysis.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14053

13543 The Resource Description Framework (RDF) as a Modern Structure for Medical Data

Authors: Gabriela Lindemann, Danilo Schmidt, Thomas Schrader, Dietmar Keune

Abstract:

The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but originate from distributed resources. The Charité - University Hospital Berlin, together with the German Research Foundation (DFG), has established a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Besides the collaborative aspect of creating new research groups, every partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the various participating centres. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model and, in a first phase, transformed original data coming from the web-based Electronic Patient Record database TBase©.
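
As an illustration of how heterogeneous patient data can be expressed as RDF triples, the following minimal sketch uses Python's rdflib; the namespace, resource identifiers and property names are hypothetical placeholders, not the OpEN.SC schema.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

# Hypothetical namespace standing in for the centre's metadata vocabulary.
EX = Namespace("http://example.org/nephrology/")

g = Graph()
g.bind("ex", EX)

patient = EX["patient/12345"]            # anonymised patient resource (placeholder ID)
finding = EX["finding/12345-creatinine"]

# Describe the patient and one laboratory finding as RDF triples.
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.hasFinding, finding))
g.add((finding, RDF.type, EX.LabFinding))
g.add((finding, EX.parameter, Literal("serum creatinine")))
g.add((finding, EX.value, Literal(1.4, datatype=XSD.decimal)))
g.add((finding, EX.unit, Literal("mg/dL")))

# Serialise to Turtle so distributed sources can exchange a common structure.
print(g.serialize(format="turtle"))
```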

Keywords: Medical databases, Resource Description Framework (RDF), metadata repository.

13542 Design of a Statistics Lecture for Multidisciplinary Postgraduate Students Using a Range of Tools and Techniques

Authors: S. Assi, M. Haffar

Abstract:

Teaching statistics is a critical and challenging task, especially for students from multidisciplinary and diverse postgraduate backgrounds. Postgraduate research students require statistics not only for the design of experiments but also for data analysis. Students often perceive statistics as a complex and technical subject; thus, they leave data analysis to the last moment. The lecture needs to be simple and inclusive at the same time to make it comprehensible and to address the learning needs of each student. Therefore, the aim of this work was to design a simple and comprehensible statistics lecture for postgraduate research students on 'Research plan, design and data collection'. The lecture adopted the constructive alignment learning theory, which facilitated the learning environment for the students. The learning environment utilized a student-centered approach and an interactive format with in-class discussion, handouts and electronic voting system handsets. For evaluation of the lecture, formative assessment was carried out with in-class discussions and poll questions introduced during and after the lecture. The whole approach proved effective in creating a learning environment for the students, who were able to apply the concepts addressed to their individual research projects.

Keywords: Teaching, statistics, lecture, multidisciplinary, postgraduate, learning theory, learning environment, student-centered approach, data analysis.

13541 Renewable Energy System Eolic-Photovoltaic for the Touristic Center La Tranca-Chordeleg in Ecuador

Authors: Christian Castro Samaniego, Daniel Icaza Alvarez, Juan Portoviejo Brito

Abstract:

In this research work, hybrid wind-photovoltaic systems (SHEF) were considered as renewable energy sources that take advantage of wind energy and solar radiation to produce electrical energy. The feasibility of a wind-photovoltaic hybrid generation system was analyzed for the La Tranca tourist viewpoint of the Chordeleg canton in Ecuador. The research process consisted of collecting data on solar radiation, temperature and wind speed, among other variables, by means of a meteorological station. Simulations were carried out in MATLAB/Simulink based on a mathematical model. Finally, the theoretical radiation-power curves were compared with the measurements made at the site.
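
A minimal sketch of the kind of mathematical model such a simulation rests on, assuming a simple linear PV irradiance-to-power relation and the cubic wind-power law; the panel area, efficiencies, rotor radius, rated limit and sample measurements below are illustrative placeholders, not the values used for La Tranca.

```python
import numpy as np

AIR_DENSITY = 1.225      # kg/m^3 at sea level (the site's density would differ)

def pv_power(irradiance_w_m2, panel_area_m2=10.0, efficiency=0.17):
    """Photovoltaic output from plane-of-array irradiance (simplified, no temperature derating)."""
    return irradiance_w_m2 * panel_area_m2 * efficiency

def wind_power(wind_speed_m_s, rotor_radius_m=1.5, cp=0.4, rated_w=3000.0):
    """Wind turbine output from the cubic wind-power law, capped at a rated power."""
    swept_area = np.pi * rotor_radius_m ** 2
    p = 0.5 * AIR_DENSITY * swept_area * cp * wind_speed_m_s ** 3
    return np.minimum(p, rated_w)

# Hourly meteorological measurements (placeholder values, not the station data).
irradiance = np.array([0.0, 150.0, 420.0, 780.0, 600.0, 90.0])   # W/m^2
wind_speed = np.array([2.1, 3.4, 5.0, 6.2, 4.8, 3.0])            # m/s

hybrid_output = pv_power(irradiance) + wind_power(wind_speed)
print(hybrid_output)  # combined watts produced each hour by the hybrid system
```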

Keywords: Hybrid system, wind turbine, modeling, simulation, validation, experimental data, panel, Ecuador.

13540 Learning Human-Like Color Categorization through Interaction

Authors: Rinaldo Christian Tanumara, Ming Xie, Chi Kit Au

Abstract:

Humans perceive color in categories, which may be identified using color names such as red, blue, etc. The categorization is unique to each human being. However, despite the individual differences, the categorization is shared among members of society, which allows communication among them, especially when color names are used. A sociable robot, in order to coexist with humans and become part of human society, must also have this shared color categorization, which can be achieved through learning. Much work has been done to enable a computer, as the brain of a robot, to learn color categorization. Most of it relies on modeling human color perception and involves considerable mathematical complexity. In contrast, in this work the computer learns color categorization through interaction with humans. This work aims at developing the innate ability of the computer to learn human-like color categorization. It focuses on how the representation of color categorization is built and developed without much mathematical complexity.
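
One simple way to picture interactive color-category learning is sketched below as an incremental prototype update in RGB space; the update rule, class structure and category names are illustrative assumptions, not the representation used in the paper.

```python
from collections import defaultdict

class ColorCategorizer:
    """Learns one prototype per color name from samples named by a human."""

    def __init__(self):
        self.prototypes = {}                 # color name -> (r, g, b) prototype
        self.counts = defaultdict(int)       # how many examples seen per name

    def teach(self, rgb, name):
        """A human shows a color sample and says its name; move that prototype toward it."""
        self.counts[name] += 1
        n = self.counts[name]
        if name not in self.prototypes:
            self.prototypes[name] = rgb
        else:
            old = self.prototypes[name]      # incremental mean of all samples seen so far
            self.prototypes[name] = tuple(o + (x - o) / n for o, x in zip(old, rgb))

    def name_of(self, rgb):
        """Categorize a new color by the nearest learned prototype."""
        dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, rgb))
        return min(self.prototypes, key=lambda name: dist(self.prototypes[name]))

robot = ColorCategorizer()
robot.teach((250, 10, 20), "red")
robot.teach((200, 30, 40), "red")
robot.teach((15, 20, 230), "blue")
print(robot.name_of((220, 25, 35)))   # -> "red"
```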

Keywords: Color categorization, color learning, machine learning, color naming.

13539 Implementation of Intuitionistic Fuzzy Approach in Maximizing Net Present Value

Authors: Gaurav Kumar, Rakesh Kumar Bajaj

Abstract:

The applicability of Net Present Value (NPV) to investment projects is becoming more and more popular in the field of engineering economics. The classical NPV methodology involves only precise and accurate data about the investment project. In the present communication, we give a new mathematical model for NPV which uses the concept of intuitionistic fuzzy set theory. The proposed model is based on the triangular intuitionistic fuzzy number and may be called the Intuitionistic Fuzzy Net Present Value (IFNPV). The model has been applied to an example and the results are presented.
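
A minimal sketch of the underlying idea, assuming cash flows given as ordinary triangular fuzzy numbers (lower, modal, upper) and evaluating the classical NPV formula component-wise; this simplification ignores the membership/non-membership structure of the intuitionistic model proposed in the paper, and the cash-flow values are illustrative.

```python
def triangular_npv(initial_outlay, cash_flows, rate):
    """NPV of triangular fuzzy cash flows (low, mid, up), discounted component-wise.

    Valid for a crisp, positive discount rate; subtracting the crisp outlay
    shifts all three components equally.
    """
    npv = [-initial_outlay, -initial_outlay, -initial_outlay]
    for t, (low, mid, up) in enumerate(cash_flows, start=1):
        factor = (1.0 + rate) ** t
        npv[0] += low / factor
        npv[1] += mid / factor
        npv[2] += up / factor
    return tuple(npv)

# Illustrative project: outlay of 1000 and three years of uncertain cash flows.
flows = [(350, 400, 450), (380, 450, 500), (400, 480, 560)]
print(triangular_npv(1000.0, flows, rate=0.10))   # (lower, modal, upper) NPV
```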

Keywords: Net Present Value, Intuitionistic Fuzzy Set, Investment Projects.

13538 The Comparison of Data Replication in Distributed Systems

Authors: Iman Zangeneh, Mostafa Moradi, Ali Mokhtarbaf

Abstract:

The necessity of the ever-increasing use of distributed data in computer networks is obvious to all. One technique performed on distributed data to increase efficiency and reliability is data replication. In this paper, after introducing this technique and its advantages, we examine several dynamic data replication strategies. We examine their characteristics under a number of usage scenarios and then propose some suggestions for their improvement.

Keywords: data replication, data hiding, consistency, dynamic data replication strategy

13537 Knowledge Discovery and Data Mining Techniques in Textile Industry

Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler

Abstract:

This paper addresses issues and techniques for the textile industry using data mining. Data mining was applied to garment-stitching data obtained from a textile company. The techniques applied to these data were the CHAID algorithm, the CART algorithm, regression analysis and artificial neural networks. Classification-based analyses were used, and a decision model of production per person and the variables affecting production was derived by this method. The results show that as daily working time increases, production per person decreases; total daily working time and production per person have a negative relationship, and this is the strongest relationship found.
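
A minimal sketch of a CART-style classification on production data, using scikit-learn's DecisionTreeClassifier; the feature names, class labels and toy records are illustrative assumptions, not the company's data set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy records: [daily working hours, operator experience in years, machine age in years]
X = np.array([
    [8,  1, 2], [9,  2, 4], [10, 1, 6], [11, 3, 3],
    [8,  5, 1], [9,  4, 2], [12, 2, 7], [10, 6, 5],
])
# Target: whether production per person was above the plant median (1) or not (0).
y = np.array([1, 1, 0, 0, 1, 1, 0, 1])

tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)  # CART splits on Gini
tree.fit(X, y)

print(export_text(tree, feature_names=["hours", "experience", "machine_age"]))
print(tree.predict([[11, 2, 5]]))   # classify a new shift record
```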

Keywords: Data mining, textile production, decision trees, classification.

13536 Optimal Diesel Engine Technology Analysis Matching the Platform of the Helicopter

Authors: M. Wendeker, K. Siadkowska, P. Magryta, Z. Czyz, K. Skiba

Abstract:

In this paper, an environmental impact analysis of the optimal Diesel engine for a light helicopter was performed. The paper answers the question of what the optimal Diesel engine for a light helicopter is, taking into consideration its expected performance and design capacity. The use of a turbocharged self-ignition engine with an electronic control system can substantially reduce the negative impact on the environment by decreasing toxic substance emissions and fuel consumption, and therefore carbon dioxide emissions. In order to establish the environmental benefits of Diesel engine technologies, mathematical models were created, providing additional insight into the environmental impact and performance of a classic turboshaft and an advanced Diesel-engine light helicopter incorporating these technology developments.

Keywords: Diesel engine, helicopter, simulation, environmental impact.

13535 A Spatial Point Pattern Analysis to Recognize Fail Bit Patterns in Semiconductor Manufacturing

Authors: Youngji Yoo, Seung Hwan Park, Daewoong An, Sung-Shick Kim, Jun-Geol Baek

Abstract:

The yield management system is very important for producing high-quality semiconductor chips in the semiconductor manufacturing process. In order to improve the quality of semiconductors, various tests are conducted in the post-fabrication (FAB) process. During the test process, a large amount of data is collected, and these data include a lot of information about defects. In general, defects on the wafer are the main cause of yield loss. Therefore, analyzing the defect data is necessary to improve the performance of yield prediction. The wafer bin map (WBM) is one of the data sets collected in the test process and includes defect information such as fail bit patterns. Fail bits have the characteristics of spatial point patterns. Therefore, this paper proposes a feature extraction method using spatial point pattern analysis. Actual data obtained from the semiconductor process are used for experiments, and the experimental results show that the proposed method recognizes the fail bit patterns more accurately.
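
As a hedged sketch of the kind of feature a spatial point pattern analysis can extract from fail-bit coordinates, the snippet below computes the mean nearest-neighbour distance and the Clark-Evans clustering index for points on a die; the coordinates and die area are toy values, and the paper's actual feature set may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_features(points, area):
    """Nearest-neighbour statistics of a 2-D point pattern.

    Returns the mean nearest-neighbour distance and the Clark-Evans index
    R = observed / expected; R < 1 suggests clustering, R > 1 regularity.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)             # k=2: nearest neighbour other than the point itself
    mean_nn = dists[:, 1].mean()
    density = len(points) / area
    expected_nn = 1.0 / (2.0 * np.sqrt(density))   # expectation under complete spatial randomness
    return mean_nn, mean_nn / expected_nn

# Toy fail-bit coordinates (column, row) on a 100 x 100 die region.
fail_bits = np.array([[10, 12], [11, 13], [12, 12], [13, 14], [60, 70], [61, 71]])
print(spatial_features(fail_bits, area=100 * 100))
```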

Keywords: Semiconductor, wafer bin map (WBM), feature extraction, spatial point patterns, contour map.

13534 Saudi Twitter Corpus for Sentiment Analysis

Authors: Adel Assiri, Ahmed Emam, Hmood Al-Dossari

Abstract:

Sentiment analysis (SA) has received growing attention in Arabic language research. However, few studies have directly applied SA to Arabic, due to the lack of a publicly available dataset for this language. This paper partially bridges this gap through its focus on one of the Arabic dialects, the Saudi dialect. It presents an annotated data set of 4,700 items for Saudi dialect sentiment analysis, with an annotation agreement of K = 0.807. Our next step is to extend this corpus and to create a large-scale lexicon for the Saudi dialect from it.
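
For context, annotation agreement of this kind is typically reported as Cohen's kappa between two annotators; a minimal sketch, assuming two hypothetical label sequences, is shown below.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical sentiment labels from two annotators over the same eight tweets.
annotator_a = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "pos"]
annotator_b = ["pos", "neg", "pos", "pos", "neu", "pos", "neg", "neg"]

# Kappa corrects raw agreement for the agreement expected by chance.
print(cohen_kappa_score(annotator_a, annotator_b))
```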

Keywords: Arabic, Sentiment Analysis, Twitter, annotation.

13533 Swine Flu Transmission Model in Risk and Non-Risk Human Population

Authors: P. Pongsumpun

Abstract:

The swine flu outbreak in humans is due to a new strain of influenza A virus subtype H1N1 that derives in part from human influenza, avian influenza, and two separate strains of swine influenza. It can be transmitted from human to human. A mathematical model for the transmission of swine flu is developed in which the human population is divided into two classes, the risk and non-risk classes. Each class is separated into susceptible, exposed, infectious, quarantined and recovered sub-classes. In this paper, we formulate the dynamical model of swine flu transmission, and repetitive contacts between people are also considered. We analyze the behavior of the transmission of this disease. The threshold condition for this disease is found, and numerical results are shown to confirm our theoretical predictions.
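
A minimal sketch of a single-class SEIQR compartmental model of this type, integrated with SciPy; the rate parameters and initial conditions are illustrative placeholders, and the paper's full two-class (risk/non-risk) structure is omitted.

```python
import numpy as np
from scipy.integrate import odeint

def seiqr(y, t, beta, sigma, q, gamma):
    """Susceptible-Exposed-Infectious-Quarantined-Recovered dynamics (single class)."""
    S, E, I, Q, R = y
    N = S + E + I + Q + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - (q + gamma) * I
    dQ = q * I - gamma * Q
    dR = gamma * (I + Q)
    return [dS, dE, dI, dQ, dR]

# Illustrative parameters: transmission, incubation, quarantine and recovery rates.
params = (0.5, 1 / 3.0, 0.2, 1 / 7.0)
y0 = [9990.0, 5.0, 5.0, 0.0, 0.0]            # initial S, E, I, Q, R
t = np.linspace(0.0, 120.0, 121)             # days

solution = odeint(seiqr, y0, t, args=params)
print(solution[-1])                          # compartment sizes at the end of the run
```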

Keywords: Mathematical model, Steady state, Swine flu, threshold condition.

13532 Real Time Force Sensing Mat for Human Gait Analysis

Authors: Darwin Gouwanda, S. M. N. Arosha Senanayake, M. M. Danushka Ranjana Marasinghe, Mervin Chandrapal, Jeya Mithra Kumar, Tung Mun Hon, Yulius

Abstract:

This paper presents a real-time force sensing instrument designed for human gait analysis. The instrument consists of three main elements: the force sensing mat, the signal conditioning and switching circuit, and the data acquisition device. In order to control and process the incoming signals from the force sensing mat, the Force-Logger and Force-Reloader programs were developed using LabVIEW 8.0. This paper describes the architecture of the force sensing mat, the signal conditioning and switching circuit, and the real-time streaming of the incoming data from the mat.

Keywords: Force platform, Force sensing resistor, human gait analysis

13531 A Study on a Research and Development Cost-Estimation Model in Korea

Authors: Babakina Alexandra, Yong Soo Kim

Abstract:

In this study, we analyzed the factors that affect research funding, using linear regression analysis, in order to increase the effectiveness of investments in national research projects. We collected 7,916 items of data on research projects that were nearing completion or had been completed between 2010 and 2011. Data pre-processing and visualization were performed to derive statistically significant results. We identified the factors that affected funding using an analysis of fit distributions and estimated increasing or decreasing tendencies based on these factors.
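
A minimal sketch of a linear regression of this kind with statsmodels; the predictor names (project duration, team size) and the toy data frame are hypothetical, standing in for the factors examined in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy project records; the real study used 7,916 projects and other variables.
projects = pd.DataFrame({
    "funding":   [120, 340, 90, 560, 210, 430, 150, 300],   # hypothetical funding amounts
    "duration":  [12, 36, 10, 48, 24, 36, 18, 30],           # months
    "team_size": [3, 8, 2, 12, 5, 9, 4, 7],
})

# Ordinary least squares: funding explained by duration and team size.
model = smf.ols("funding ~ duration + team_size", data=projects).fit()
print(model.params)       # estimated increasing/decreasing tendency per factor
print(model.rsquared)
```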

Keywords: R&D funding, Cost estimation, Linear regression, Preliminary feasibility study.

13530 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

An intelligent transportation system is essential to build smarter cities. Machine learning based transportation prediction could be a highly promising approach, since it makes invisible aspects of urban mobility visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research chooses the bus system. The research problem is that the existing headway model cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented that finds patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the result. The prototype model is composed of real-time bus data gathered through public data portals and the government's real-time Application Programming Interface (API). These data are the fundamental resources for organizing interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model was designed with a machine learning tool (RapidMiner Studio), and tests for bus delay prediction were conducted. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on this analysis method, the research represents an effective use of machine learning and urban big data to understand urban dynamics.
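
A hedged sketch of the underlying idea, using a scikit-learn random forest in place of the RapidMiner workflow described in the paper; the feature names and toy records are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy records: [road speed km/h, passengers waiting, rainfall mm/h, scheduled headway min]
X = np.array([
    [42, 10, 0.0, 10], [18, 25, 4.5, 10], [35, 12, 0.0, 12],
    [15, 30, 7.0, 10], [50,  8, 0.0, 15], [22, 20, 2.0, 12],
])
y = np.array([1.0, 6.5, 2.0, 8.0, 0.5, 4.0])   # observed bus delay in minutes

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Predict the delay under current conditions and adjust the headway plan accordingly.
print(model.predict([[20, 28, 5.0, 10]]))
```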

Keywords: Big data, bus headway prediction, machine learning, public transportation.

13529 Explorative Data Mining of Constructivist Learning Experiences and Activities with Multiple Dimensions

Authors: Patrick Wessa, Bart Baesens

Abstract:

This paper discusses the use of explorative data mining tools that allow the educator to explore new relationships between reported learning experiences and actual activities, even if there are multiple dimensions with a large number of measured items. The underlying technology is based on the so-called Compendium Platform for Reproducible Computing (http://www.freestatistics.org), which was built on top of the computational R framework (http://www.wessa.net).

Keywords: Reproducible computing, data mining, explorative data analysis, compendium technology, computer assisted education

13528 Mobile Phone as a Tool for Data Collection in Field Research

Authors: Sandro Mourão, Karla Okada

Abstract:

The need for accurate and timely field data is shared among organizations engaged in fundamentally different activities, whether public services or commercial operations. Basically, there are three major components in the qualitative research process: data collection, interpretation and organization of data, and the analytic process. Significant technological advancements have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve this process. This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses and sends them back to a server for immediate analysis.

Keywords: Data Gathering, Field Research, Mobile Phone, Survey.

13527 Improvement in Power Transformer Intelligent Dissolved Gas Analysis Method

Authors: S. Qaedi, S. Seyedtabaii

Abstract:

Non-destructive evaluation of in-service power transformer condition is necessary for avoiding catastrophic failures, and Dissolved Gas Analysis (DGA) is one of the important methods. Traditional, statistical and intelligent DGA approaches have been adopted for accurate classification of incipient fault sources. Unfortunately, there are often not enough faulty patterns available for sufficient training of intelligent systems. Bootstrapping is expected to alleviate this shortcoming and to yield algorithms with better classification success rates. In this paper, the performance of artificial neural network, K-Nearest Neighbour and support vector machine methods using bootstrapped data is detailed, and it is shown that while the success rate of the ANN algorithm improves remarkably, the outcomes of the others do not benefit as much from the enlarged data space. For assessment, two databases are employed: IEC TC10 and a dataset collected from data reported in papers. The high average test success rate clearly exhibits this outcome.
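
A minimal sketch of bootstrap enlargement of a small fault-pattern training set, using scikit-learn's resample together with an SVM classifier; the gas-ratio features and labels below are toy values, not the IEC TC10 records.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

# Toy DGA feature vectors (e.g. gas ratios) and incipient-fault class labels.
X = np.array([[0.1, 2.0], [0.2, 1.8], [3.5, 0.4], [3.0, 0.5], [1.0, 1.0], [1.2, 0.9]])
y = np.array([0, 0, 1, 1, 2, 2])

# Bootstrap: resample with replacement to enlarge the scarce faulty-pattern set.
X_boot, y_boot = resample(X, y, replace=True, n_samples=60, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_boot, y_boot)
print(clf.predict([[0.15, 1.9]]))   # classify a new dissolved-gas sample
```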

Keywords: Dissolved gas analysis, Transformer incipient fault, Artificial Neural Network, Support Vector Machine (SVM), K-Nearest Neighbor (KNN)

13526 Cross Project Software Fault Prediction at Design Phase

Authors: Pradeep Singh, Shrish Verma

Abstract:

Software fault prediction models are created by using the source code, metrics processed from the same or a previous version of the code, and related fault data. Some companies do not store and keep track of all the artifacts required for software fault prediction. To construct a fault prediction model for such a company, training data from other projects can be one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can be used as an initial guideline for projects where no previous fault data are available. We analyze seven datasets from the NASA Metrics Data Program which offer design as well as code metrics. Overall, the results of cross-project prediction are comparable to within-company learning.
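
A minimal sketch of cross-project fault prediction with a Naïve Bayes classifier, training on design metrics from one project and predicting for another; the metric names and toy values are hypothetical, not the NASA MDP data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Design metrics per module, e.g. [fan-in, fan-out, design complexity].
project_a_X = np.array([[2, 3, 4], [1, 1, 2], [6, 8, 12], [5, 7, 10], [2, 2, 3], [7, 9, 15]])
project_a_y = np.array([0, 0, 1, 1, 0, 1])    # 1 = module turned out to be faulty

# Train on project A (source project with fault history) ...
clf = GaussianNB()
clf.fit(project_a_X, project_a_y)

# ... and predict for project B, which has no fault history yet (cross-project prediction).
project_b_X = np.array([[3, 4, 5], [6, 7, 11]])
print(clf.predict(project_b_X))
```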

Keywords: Software Metrics, Fault prediction, Cross project, Within project.

13525 Dynamic Analysis of Nonlinear Models with Infinite Extension by Boundary Elements

Authors: Delfim Soares Jr., Webe J. Mansur

Abstract:

The Time-Domain Boundary Element Method (TD-BEM) is a well-known numerical technique that properly handles dynamic analyses of media of infinite extension. However, when these analyses also involve nonlinear behavior, very complex numerical procedures arise in the TD-BEM, which may make its application prohibitive. In order to avoid this drawback and model nonlinear infinite media, the present work couples two BEM formulations, aiming to achieve the best of both worlds. In this context, the regions expected to behave nonlinearly are discretized by the Domain Boundary Element Method (D-BEM), which has a simpler mathematical formulation but is unable to deal with infinite-domain analyses; the TD-BEM is employed in the sense of an effective non-reflective boundary. An iterative procedure, based on a relaxed renewal of the variables at the common interfaces, is considered for the coupling of the TD-BEM and D-BEM. The focus is on elastoplastic models, and different time-steps are allowed for each BEM formulation in the coupled analysis.
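
The relaxed interface update can be pictured with the small sketch below, which iterates u_new = (1 - omega) u_old + omega u_computed until the interface values stop changing; the solver stand-ins and the relaxation factor are illustrative assumptions, not the paper's coupled BEM solvers.

```python
import numpy as np

def relaxed_coupling(solve_dbem, solve_tdbem, u0, omega=0.5, tol=1e-8, max_iter=200):
    """Iteratively exchange interface values between two solvers with under-relaxation."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        traction = solve_dbem(u)            # nonlinear interior region (D-BEM stand-in)
        u_new = solve_tdbem(traction)       # infinite exterior region (TD-BEM stand-in)
        u_relaxed = (1.0 - omega) * u + omega * u_new   # relaxed renewal at the interface
        if np.linalg.norm(u_relaxed - u) < tol:
            return u_relaxed
        u = u_relaxed
    return u

# Toy linear stand-ins for the two solvers, just to exercise the iteration.
A = np.array([[2.0, 0.3], [0.1, 1.5]])
solve_dbem = lambda u: A @ u + np.array([1.0, 0.5])
solve_tdbem = lambda t: 0.4 * t
print(relaxed_coupling(solve_dbem, solve_tdbem, u0=[0.0, 0.0]))
```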

Keywords: Boundary Element Method, Dynamic Elastoplastic Analysis, Iterative Coupling, Multiple Time-Steps.

13524 Estimating the Runoff Using the Simple Tank Model and Comparing it with the SCS-CN Model - A Case Study of the Dez River Basin

Authors: H. Alaleh, N. Hedayat, A. Alaleh, H. Ayazi, A. Ruhani

Abstract:

Run-off is considered an important hydrological factor in feasibility studies of river engineering and irrigation-related projects under arid and semi-arid conditions. Flood control is one of the crucial factors; its management, while mitigating the destructive consequences of floods, abstracts a considerable volume of renewable water resources. The methodology applied here is based on Mizumura, who applied a simple tank mathematical model to simulate the rainfall-run-off process in a particular water basin using data from the observed hydrograph. The model was applied to the Dez River basin adjacent to the Greater Dezful region, Iran, in order to simulate and estimate floods. Results indicated that the hydrographs calculated with the simple tank method, the SCS-CN model and the observed hydrographs were in close agreement. It was also found that, on average, the flood timing and discharge peaks of the simple tank model were closer to the observational data than those of the CN method. On the other hand, the flood volume calculated by the CN model was significantly closer to the observational data than that of the simple tank model.
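
A minimal sketch of a single simple-tank rainfall-runoff model, in which storage fills with excess rainfall and drains through a linear outlet; the coefficients and rainfall series are illustrative placeholders, not the calibrated Dez River values.

```python
def simple_tank(rainfall_mm, k=0.3, infiltration=0.1, storage0=0.0):
    """One linear tank: storage gains excess rainfall and releases k * storage as runoff each step."""
    storage = storage0
    runoff = []
    for rain in rainfall_mm:
        storage += max(rain - infiltration * rain, 0.0)   # excess rainfall enters the tank
        q = k * storage                                    # linear outlet discharge
        storage -= q
        runoff.append(q)
    return runoff

# Hypothetical hourly rainfall depths (mm) for one storm event.
storm = [0.0, 2.0, 8.0, 15.0, 10.0, 4.0, 1.0, 0.0, 0.0]
hydrograph = simple_tank(storm)
print([round(q, 2) for q in hydrograph])   # simulated runoff ordinates
```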

Keywords: Simple tank, Dez River, run-off, lag time, excess rainfall.

13523 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN

Authors: N. Muthukumaran, R. Ravi

Abstract:

A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication, both lossy and lossless, through a wireless sensor network is presented. Images have a considerably higher storage requirement than text. While transmitting multimedia content, there is a chance of packets being dropped due to noise and interference. At the receiver end, the packets that carry valuable information might be damaged or lost due to noise, interference and congestion. In order to prevent the valuable information from being dropped, various retransmission schemes have been proposed. The proposed scheme uses QTD, an image segmentation method that divides the image into homogeneous areas. The scheme involves the analysis of parameters such as compression ratio, peak signal-to-noise ratio, mean square error and bits per pixel of the compressed image, as well as an analysis of the difficulties during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as the visual quality; the algorithm is implemented in MATLAB 7.1 and the NS2 simulator.
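
A minimal sketch of quad tree decomposition on a grayscale image array, recursively splitting any block whose intensity range exceeds a homogeneity threshold; the threshold, minimum block size and toy image are illustrative assumptions, not the parameters of the MATLAB implementation.

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None, threshold=10, min_size=2):
    """Return a list of homogeneous blocks (x, y, size, mean) covering the image."""
    if size is None:
        size = img.shape[0]                       # assumes a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= threshold:
        return [(x, y, size, float(block.mean()))]
    half = size // 2                              # split into four quadrants and recurse
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += quadtree(img, x + dx, y + dy, half, threshold, min_size)
    return blocks

# Toy 8x8 image: a bright square on a dark background.
image = np.zeros((8, 8), dtype=np.uint8)
image[2:6, 2:6] = 200
leaves = quadtree(image)
print(len(leaves), "homogeneous blocks")          # fewer blocks -> higher compression ratio
```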

Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.

13522 Futures Trading: Design of a Strategy

Authors: Jan Zeman

Abstract:

The paper describes futures trading and aims to design a speculator's trading strategy. The problem is formulated as a decision-making task and solved as such. The solution of the task leads to complex mathematical problems, and approximations of the decision making are required. Two kinds of approximation are used in the paper: Monte Carlo for the multi-step prediction and iterations spread in time for the optimization. The solution is applied to real-market data, and the results of the off-line experiments are presented.
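
A minimal sketch of Monte Carlo multi-step prediction of a futures price, simulating random-walk return paths and summarizing the outcome of a simple hold decision; the drift, volatility and horizon values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_paths(price0, drift, volatility, horizon, n_paths=10_000):
    """Simulate price paths over `horizon` steps as a Gaussian random walk on log returns."""
    steps = rng.normal(drift, volatility, size=(n_paths, horizon))
    return price0 * np.exp(np.cumsum(steps, axis=1))

paths = monte_carlo_paths(price0=100.0, drift=0.0005, volatility=0.01, horizon=20)

# Expected terminal price and the probability that a long position gains over the horizon.
print(paths[:, -1].mean())
print((paths[:, -1] > 100.0).mean())
```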

Keywords: futures trading, decision making

13521 Optimum Stratification of a Skewed Population

Authors: D.K. Rao, M.G.M. Khan, K.G. Reddy

Abstract:

The focus of this paper is to develop a technique for solving the combined problem of determining the Optimum Strata Boundaries (OSB) and the Optimum Sample Size (OSS) of each stratum, when the population under study is skewed and the study variable has a Pareto frequency distribution. The problem of determining the OSB is formulated as a Mathematical Programming Problem (MPP), which is then solved by the dynamic programming technique. A numerical example is presented to illustrate the computational details of the proposed method. The proposed technique is useful for obtaining the OSB and OSS for a Pareto-type skewed population, minimizing the variance of the estimate of the population mean.

Keywords: Stratified sampling, Optimum strata boundaries, Optimum sample size, Pareto distribution, Mathematical programming problem, Dynamic programming technique.

13520 Investigation of Learning Challenges in Building Measurement Unit

Authors: Argaw T. Gurmu, Muhammad N. Mahmood

Abstract:

The objective of this research is to identify architecture and construction management students' learning challenges in building measurement. The research used survey data collected from students who completed the building measurement unit. NVivo qualitative data analysis software was used to identify relevant themes. The analysis of the qualitative data revealed major learning difficulties such as the inadequacy of practice questions for the examination, inability to work as a team, lack of detailed understanding of the prerequisite units, insufficient time allocated for tutorials, and incompatibility of lecture and tutorial schedules. The output of this research can be used as a basis for improving teaching and learning activities in construction measurement units.

Keywords: Building measurement, construction management, learning challenges, evaluation survey.

13519 Design and Simulation of Air-Fuel Ratio Control System for Distributorless CNG Engine

Authors: Ei Ei Moe, Zaw Min Aung, Kyawt Khin

Abstract:

This paper puts forward an air-fuel ratio control method based on a PI controller. With the help of MATLAB/SIMULINK software, the mathematical model of the air-fuel ratio control system for a distributorless CNG engine is constructed. The objective is to maintain the cylinder-to-cylinder air-fuel ratio at a prescribed set point, determined primarily by the state of the Three-Way Catalyst (TWC), so that the pollutants in the exhaust are removed with the highest efficiency. The concurrent control of the air-fuel ratio under transient conditions can be implemented by a Proportional-Integral (PI) controller. The simulation results indicate that the control method can readily eliminate the air/fuel maldistribution and maintain the air/fuel ratio at stoichiometry within a minimum number of engine events.
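
A minimal sketch of a discrete-time PI control loop of this kind, driving a measured air-fuel ratio toward a stoichiometric set point; the gains, set point and first-order plant stand-in are illustrative assumptions, not the engine model built in SIMULINK.

```python
class PIController:
    """Discrete proportional-integral controller with a fixed sample time."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Stoichiometric air-fuel ratio for the CNG mixture (illustrative target value).
SETPOINT = 17.2

controller = PIController(kp=0.8, ki=2.0, dt=0.01)
afr = 14.0                         # initial measured air-fuel ratio
for _ in range(200):
    u = controller.update(SETPOINT, afr)
    afr += 0.05 * u                # crude first-order response of the fuel path to the command
print(round(afr, 3))               # settles near the set point
```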

Keywords: Distributorless CNG Engine, Mathematical Model of Air-Fuel Control, MATLAB/SIMULINK, PI controller

13518 A Hybrid DEA Model for the Measurement of the Environmental Performance

Authors: A. Hadi-Vencheh, N. Shayesteh Moghadam

Abstract:

Data envelopment analysis (DEA) has gained great popularity in environmental performance measurement because it can provide a synthetic, standardized environmental performance index when pollutants are suitably incorporated into the traditional DEA framework. Since some of the environmental performance indicators cannot be controlled by company managers, it is necessary to develop the model in a way that it can be applied when discretionary and/or non-discretionary factors are involved. In this paper, we present a semi-radial DEA approach to measuring environmental performance which accounts for non-discretionary factors. The model is then applied to a real case.
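
A minimal sketch of a basic input-oriented DEA efficiency score solved as a linear program with SciPy; this is the standard radial CCR envelopment model, not the semi-radial, non-discretionary formulation developed in the paper, and the input/output matrix is a toy example.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 decision-making units, 2 inputs (rows of X), 1 output (row of Y).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[10.0, 12.0, 11.0, 14.0]])

def ccr_efficiency(k):
    """Input-oriented CCR score of DMU k: min theta s.t. a composite unit dominates it."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lambda_1..lambda_n]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[:, [k]], X]
    b_in = np.zeros(X.shape[0])
    # Outputs: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]
    b_out = -Y[:, k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for k in range(X.shape[1]):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```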

Keywords: Environmental performance, efficiency, non-discretionary variables, data envelopment analysis.

13517 A Survey of Sentiment Analysis Based on Deep Learning

Authors: Pingping Lin, Xudong Luo, Yifan Fan

Abstract:

Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as major e-commerce websites, generate a massive amount of comments, which can be used to analyse people's opinions or emotions. The existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely heavily on sentiment dictionaries or large amounts of labelled data; the third overcomes these two problems. So, in this paper, we focus on the third one. Specifically, we survey various sentiment analysis methods based on convolutional neural networks, recurrent neural networks, long short-term memory, deep neural networks, deep belief networks, and memory networks. We compare their features, advantages, and disadvantages. Also, we point out the main problems of these methods, which may be worthy of careful study in the future. Finally, we examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis.
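
A minimal sketch of one of the surveyed architectures, a small LSTM sentiment classifier in Keras; the vocabulary size, sequence length and layer sizes are illustrative assumptions, and no trained weights or data are implied.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20_000     # assumed vocabulary of word indices
MAX_LEN = 100           # assumed padded review length

# Embedding -> LSTM -> sigmoid output for binary (positive/negative) sentiment.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would look like: model.fit(padded_token_ids, labels, epochs=3, batch_size=32)
```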

Keywords: Natural language processing, sentiment analysis, document analysis, multimodal sentiment analysis, deep learning.

13516 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran

Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh

Abstract:

Under-passing tunnels are strongly influenced by the surrounding soils. Specifying real soil behavior is complex, owing to the many uncertainties in soil properties and, additionally, to inappropriate soil constitutive models. These factors may cause the settlements obtained in numerical analysis to be incompatible with the values observed in actual construction. This paper reports a case study of a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program, and the soil behavior was modeled by the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters were revised by a numerical back-analysis using laboratory and field test data and the obtained monitoring data. The result confirms that the soil parameters are typically under-estimated on the conservative side and, additionally, that the constitutive models cannot be applied properly to all soil conditions.

Keywords: NATM tunnel, initial lining, field test data, laboratory test data, monitoring data, numerical back-analysis.

13515 Automated Process Quality Monitoring with Prediction of Fault Condition Using Measurement Data

Authors: Hyun-Woo Cho

Abstract:

The detection of incipient abnormal events is important to improve the safety and reliability of machine operations and to reduce losses caused by failures. Improper set-up or alignment of parts often leads to severe problems in many machines. The construction of models for predicting faulty conditions is essential in making decisions on when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. The approach utilizes genetic algorithm (GA) based variable selection, and we evaluate the predictive performance of several prediction methods using real data. The results show that the calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model building steps.
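
A minimal sketch of a calibration model in this spirit, using ordinary PCA followed by linear regression (principal component regression) as a stand-in for the supervised probabilistic PCA used in the paper; the measurement matrix and fault target below are toy values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Toy machine measurements (rows = samples, columns = sensors) and a fault-severity target.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=50)

# Principal component regression: compress the sensors, then regress on the scores.
calibration = make_pipeline(PCA(n_components=3), LinearRegression())
calibration.fit(X, y)

new_measurement = rng.normal(size=(1, 8))
print(calibration.predict(new_measurement))   # predicted fault condition for new data
```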

Keywords: Prediction, operation monitoring, on-line data, nonlinear statistical methods, empirical model.

13514 The Effect of the Hourly Compensation on the Unemployment Rate: Comparative Analysis of United States, Canada and the United Kingdom Using Panel Data Regression Analysis

Authors: Ashiquer Rahman, Hares Mohammad, Ummey Salma

Abstract:

A country's hourly compensation and unemployment rate are two of its most crucial indicators. They are not merely statistics; they have profound effects on individuals, families, the country, and the economy, and they are inversely related to one another. Increased hourly compensation in the manufacturing sector can have a favorable effect on job-changing issues. Moreover, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, in order to determine the effect of hourly compensation on the unemployment rate, we use panel data regression models and evaluate the expected link between the two variables. We estimate the fixed effects model (FEM), evaluate the error components model (ECM), and determine which of the two is better by pooling all 60 observations. We then analyze and review the data by comparing the countries (United States, Canada and the United Kingdom) using panel data regression models. Finally, we provide results, analysis and a summary of this research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful guidelines for governments and the academic community on using an econometric and social approach to hourly compensation and the unemployment rate in order to mitigate the problem.
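
A minimal sketch of a fixed effects panel regression estimated with country dummy variables (the LSDV form) in statsmodels; the data frame values are placeholders, not the actual compensation and unemployment series analyzed in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy balanced panel: three countries observed over four years (placeholder values).
panel = pd.DataFrame({
    "country": ["US", "US", "US", "US", "CA", "CA", "CA", "CA", "UK", "UK", "UK", "UK"],
    "year":    [2010, 2011, 2012, 2013] * 3,
    "compensation": [34.0, 35.0, 36.0, 37.0, 30.0, 31.0, 31.5, 32.0, 28.0, 29.0, 29.5, 30.0],
    "unemployment": [8.0, 7.5, 7.0, 6.5, 7.5, 7.2, 7.0, 6.8, 8.2, 8.0, 7.8, 7.5],
})

# Fixed effects via country dummies (LSDV): each country gets its own intercept.
fe_model = smf.ols("unemployment ~ compensation + C(country)", data=panel).fit()
print(fe_model.params)     # the compensation coefficient is the within-country effect
```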

Keywords: Hourly compensation, unemployment rate, panel data regression models, dummy variables, random effects model, fixed effects model, linear regression model.
