Search results for: behavioural models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2566


1726 Prediction of Watermelon Consumer Acceptability based on Vibration Response Spectrum

Authors: R. Abbaszadeh, A. Rajabipour, M. Delshad, M. J. Mahjub, H. Ahmadi

Abstract:

It is difficult to judge watermelon ripeness by outward characteristics such as size or external color. In this paper, a nondestructive method was studied to determine watermelon (Crimson Sweet) quality. Responses of samples to excitation vibrations were detected using laser Doppler vibrometry (LDV). The phase shift between input and output vibrations was extracted over the whole frequency range, and the first and second resonance frequencies were derived from the frequency response spectra. After the nondestructive tests, the watermelons were sensory evaluated, and the samples were graded over a range of ripeness based on overall acceptability (the total of traits desired by consumers). Regression models were developed to predict quality from the obtained results and sample mass. The determination coefficients of the calibration and cross-validation models were 0.89 and 0.71, respectively. This study demonstrates the feasibility of using information derived from vibration response curves to predict fruit quality. The vibration response of a watermelon measured with the LDV method requires no direct contact and is accurate and timely, which could be a significant advantage for classifying watermelons based on consumer opinions.
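
The following is a minimal sketch, not the authors' code, of the kind of regression calibration described: mapping vibration-derived features and sample mass to a sensory acceptability score. The data, feature ranges and coefficients are synthetic placeholders.

```python
# Illustrative sketch (synthetic data): calibrating a regression model that maps
# vibration-response features and mass to an overall acceptability score.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
# Hypothetical features: first/second resonance frequencies (Hz) and sample mass (kg).
X = np.column_stack([
    rng.uniform(60, 120, n),   # first resonance frequency
    rng.uniform(120, 220, n),  # second resonance frequency
    rng.uniform(3.0, 7.0, n),  # mass
])
# Hypothetical sensory score (overall acceptability) on a 1-9 scale.
y = 9 - 0.04 * X[:, 0] + 0.01 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, y)
print("calibration R^2:", model.score(X, y))
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```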

Keywords: Laser Doppler vibrometry, Phase shift, Overall acceptability, Regression model, Resonance frequency, Watermelon

1725 Neural Networks-Based Acoustic Annoyance Model for Laptop Hard Disk Drive

Authors: Yi Chao Ma, Cheng Siong Chin, Wai Lok Woo

Abstract:

Since the last decade, there has been rapid growth in digital multimedia, such as high-resolution media files and three-dimensional movies. Hence, there is a need for large digital storage such as the Hard Disk Drive (HDD), and users expect a quieter HDD in their laptops. In this paper, a jury test was conducted on a group of 34 people, 17 of whom are students representing potential consumers, while the remaining participants are engineers familiar with HDDs. A total of 13 HDD sound samples were selected from over a hundred HDD noise recordings, based on an agreed subjective feeling. The samples were played to the participants through a head acoustic playback system, which enabled them to experience an environment as close as possible to the one in which the recordings were made. Analysis of the results indicated that different groups have different perceptions of the noises. Two neural network-based acoustic annoyance models were established based on back-propagation neural networks. Four psychoacoustic metrics (loudness, sharpness, roughness and fluctuation strength) are used as the inputs of the model, and the subjective evaluation results are taken as the output. The developed models are reasonably accurate in simulating both training and test samples.
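
As a hedged illustration of the modelling step, the sketch below fits a small feed-forward (back-propagation) network from four psychoacoustic metrics to an annoyance rating; the data, network size and target relationship are assumptions, not the paper's model.

```python
# Rough sketch (synthetic data): a small back-propagation network mapping four
# psychoacoustic metrics to a subjective annoyance rating.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 200
# Hypothetical inputs: loudness, sharpness, roughness, fluctuation strength.
X = rng.uniform(0, 1, size=(n, 4))
# Hypothetical jury annoyance score driven mostly by loudness and sharpness.
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 0.1, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1))
model.fit(X, y)
print("training R^2:", model.score(X, y))
```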

Keywords: Hard disk drive noise, jury test, neural network model, psychoacoustic annoyance.

1724 MITOS-RCNN: Mitotic Figure Detection in Breast Cancer Histopathology Images Using Region Based Convolutional Neural Networks

Authors: Siddhant Rao

Abstract:

Studies estimate that there will be 266,120 new cases of invasive breast cancer and 40,920 breast cancer induced deaths in 2018 alone. Despite the pervasiveness of this affliction, the current process to obtain an accurate breast cancer prognosis is tedious and time consuming. It usually requires a trained pathologist to manually examine histopathological images and identify the features that characterize various cancer severity levels. We propose MITOS-RCNN: a region based convolutional neural network (RCNN) geared for small object detection to accurately grade one of the three factors that characterize tumor belligerence described by the Nottingham Grading System: mitotic count. Other computational approaches to mitotic figure counting and detection do not demonstrate ample recall or precision to be clinically viable. Our models outperformed all previous participants in the ICPR 2012 challenge, the AMIDA 2013 challenge and the MITOS-ATYPIA-14 challenge, along with recently published works. Our model achieved an F-measure score of 0.955, a 6.11% improvement in accuracy over the most accurate of the previously proposed models.

Keywords: Object detection, histopathology, breast cancer, mitotic count, deep learning, computer vision.

1723 Application of Stochastic Models to Annual Extreme Streamflow Data

Authors: Karim Hamidi Machekposhti, Hossein Sedghi

Abstract:

This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. An Auto-Regressive Integrated Moving Average (ARIMA) model was used to simulate these series and to forecast them into the future. For the analysis, annual extreme streamflow data of the Jelogir Majin station (above the Karkheh dam reservoir) for the years 1958–2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of the annual extreme streamflow was removed using first-order differencing (d=1) prior to the development of the ARIMA model. The ARIMA(4,1,1) model was found to be the most suitable for simulating the annual extreme streamflow of the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and can assist decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
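
A minimal sketch of the workflow described above, assuming a synthetic stand-in for the Jelogir Majin record: fit an ARIMA(4,1,1) model to an annual extreme-streamflow series and forecast ten years ahead.

```python
# Illustrative ARIMA(4,1,1) fit and 10-year forecast on a synthetic annual series
# standing in for 1958-2005 peak flows (values in m^3/s are hypothetical).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n_years = 48  # 1958-2005
flow = 1500 + 5 * np.arange(n_years) + rng.normal(0, 300, n_years)

fit = ARIMA(flow, order=(4, 1, 1)).fit()
print(fit.summary())
print(fit.forecast(steps=10))  # ten-year forecast of annual extreme streamflow
```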

Keywords: Stochastic models, ARIMA, extreme streamflow, Karkheh River.

1722 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool to make a profit by speculation in financial markets. A significant number of traders, private and institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the basis for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, how to use limit conditions to build a mathematical filter for investment opportunities, and how to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
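
The Price Prediction Line algorithm itself is not reproduced here; the sketch below only illustrates the general idea of fitting a trend line to recent closes and emitting buy/sell signals when price crosses limit bands around that line. All parameters and prices are illustrative assumptions.

```python
# Generic trend-line/limit-band signal sketch (NOT the paper's Price Prediction Line).
import numpy as np

def trend_line_signals(closes, band=0.01):
    """Return +1 (buy), -1 (sell) or 0 per bar from a least-squares trend line."""
    t = np.arange(len(closes))
    slope, intercept = np.polyfit(t, closes, 1)   # linear trend line
    line = slope * t + intercept
    signals = np.zeros(len(closes), dtype=int)
    signals[closes < line * (1 - band)] = +1      # price well below trend: buy
    signals[closes > line * (1 + band)] = -1      # price well above trend: sell
    return signals

closes = np.array([13000, 13120, 13080, 13210, 12950, 13300, 13420, 13100.0])
print(trend_line_signals(closes))
```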

Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.

1721 Modelling Sudoku Puzzles as Block-world Problems

Authors: Cecilia Nugraheni, Luciana Abednego

Abstract:

Sudoku is a kind of logic puzzle. Each puzzle consists of a board of 9×9 cells, divided into nine 3×3 subblocks, and a set of numbers from 1 to 9. The aim of the puzzle is to fill in every cell of the board with a number from 1 to 9 such that every row, every column, and every subblock contains each number exactly once. Sudoku puzzles belong to the class of NP-complete combinatorial problems. They can be solved by using a variety of techniques/algorithms such as genetic algorithms, heuristics, integer programming, and so on. In this paper, we propose a new approach for solving Sudoku, which is to model the puzzles as block-world problems. In block-world problems, there are a number of boxes on a table with a particular order or arrangement, and the objective is to change this arrangement into a targeted arrangement with the help of two types of robots. We present three models for Sudoku, modelling it as parameterized multi-agent systems. A parameterized multi-agent system is a multi-agent system which consists of several uniform/similar agents, and the number of agents in the system is stated as the parameter of the system. We use the Temporal Logic of Actions (TLA) for formalizing our models.

Keywords: Sudoku puzzle, block-world problem, parameterized multi-agent systems modelling, Temporal Logic of Actions.

1720 Reducing Variation of Dyeing Process in Textile Manufacturing Industry

Authors: M. Zeydan, G. Toğa

Abstract:

This study deals with a multi-criteria optimization problem which has been transformed into a single-objective optimization problem using Response Surface Methodology (RSM), Artificial Neural Network (ANN) and Grey Relational Analysis (GRA) approaches. Grey-RSM and Grey-ANN are hybrid techniques which can be used for solving multi-criteria optimization problems. This research has two main purposes: (1) to determine optimum and robust fiber dyeing process conditions by using RSM and ANN based on GRA, and (2) to obtain the most suitable model by comparing models developed with the different methodologies. The design variables for the fiber dyeing process in textiles are temperature, time, softener, anti-static, material quantity, pH, retarder, and dispergator. The quality characteristics to be evaluated are nominal color consistency of the fiber, maximum strength of the fiber, and minimum color of the dyeing solution. The GRA-RSM model with exact level values, the GRA-RSM model with interval level values and the GRA-ANN model were compared with each other based on the GRA output value and the MSE (Mean Square Error) of the outputs. As a result, the GRA-ANN model with interval values appears to be the most suitable for reducing the variation of the dyeing process in terms of the GRA output value of the model.
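
For readers unfamiliar with GRA, the sketch below shows the standard grey relational coefficient and grade computation used to collapse several normalized quality responses into one value per experimental run. The dyeing data and weights are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch of Grey Relational Analysis (GRA) with equal criterion weights.
import numpy as np

def grey_relational_grade(responses, zeta=0.5):
    """responses: (runs x criteria) matrix already normalized to [0, 1], larger-is-better."""
    delta = np.abs(1.0 - responses)                        # deviation from the ideal sequence
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)   # grey relational coefficients
    return coeff.mean(axis=1)                              # grey relational grade per run

# Three hypothetical dyeing runs scored on colour consistency, strength, residual colour.
normalized = np.array([[0.90, 0.70, 0.80],
                       [0.60, 0.95, 0.55],
                       [0.75, 0.80, 0.90]])
print(grey_relational_grade(normalized))
```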

Keywords: Artificial Neural Network, Grey Relational Analysis, Optimization, Response Surface Methodology

1719 Dynamic Variational Multiscale LES of Bluff Body Flows on Unstructured Grids

Authors: Carine Moussaed, Stephen Wornom, Bruno Koobus, Maria Vittoria Salvetti, Alain Dervieux

Abstract:

The effects of dynamic subgrid-scale (SGS) models are investigated in variational multiscale (VMS) LES simulations of bluff-body flows. The spatial discretization is based on a mixed finite element/finite volume formulation on unstructured grids. In the VMS approach used in this work, the separation between the largest and the smallest resolved scales is obtained through a variational projection operator and finite volume cell agglomeration. The dynamic versions of the Smagorinsky and WALE SGS models are used to account for the effects of the unresolved scales; in the VMS approach, these effects are modeled only in the smallest resolved scales. The dynamic VMS-LES approach is applied to the simulation of the flow around a circular cylinder at Reynolds numbers 3900 and 20000 and to the flow around a square cylinder at Reynolds numbers 22000 and 175000. As in previous studies, it is observed that the dynamic SGS procedure has a smaller impact on the results within the VMS approach than in classical LES, but improvements are demonstrated for important features such as the recirculating part of the flow. The global prediction is improved at a small extra computational cost.

Keywords: variational multiscale LES, dynamic SGS model, unstructured grids, circular cylinder, square cylinder.

1718 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen City, was selected for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance and travel frequency all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions aimed at reducing public transportation and non-motorized travel times are put forward, and the policy effect is evaluated. Before this evaluation, the prediction performance of the MNL, SVM and MLP models was assessed; after parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, having the highest prediction accuracy, was selected to evaluate the effect of the policy improvements. The results show that after implementation of the policy, the share of public transportation in Plan 1 and Plan 2 increased by 14.04% and 9.86%, respectively, while the share of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased. The measures can therefore be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and they can provide a reference for relevant departments when formulating transportation policies.
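
A hedged sketch of the model-comparison step follows: three classifiers of the kinds named above (multinomial logit, SVM, MLP) scored by prediction accuracy. The traveller attributes and mode labels are synthetic stand-ins for the Futian Street survey data.

```python
# Illustrative comparison of mode-choice classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical traveller attributes -> one of four modes (walk/bike, bus, metro, car).
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MNL": LogisticRegression(max_iter=2000),
    "SVM": SVC(),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```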

Keywords: Travel characteristics analysis, transportation choice, travel sharing rate, neural network model, traffic resource allocation.

1717 Phenomenological Ductile Fracture Criteria Applied to the Cutting Process

Authors: František Šebek, Petr Kubík, Jindřich Petruška, Jiří Hůlka

Abstract:

The present study is aimed at the cutting process of circular cross-section rods, in which fracture is used to separate one rod into two pieces. By incorporating a phenomenological ductile fracture model into the explicit formulation of the finite element method, the process can be analyzed without the necessity of carrying out many real experiments, which could be expensive in the case of repetitive testing under different conditions. In the present paper, AISI 1045 steel was examined: tensile tests of smooth and notched cylindrical bars were conducted together with biaxial testing of notched tube specimens to calibrate the material constants of selected phenomenological ductile fracture models. These were implemented into Abaqus/Explicit through the user subroutine VUMAT and used for the cutting process simulation. As the calibration process is based on variables which cannot be obtained directly from experiments, numerical simulations of the fracture tests are an inevitable part of the calibration. Finally, experiments on the cutting process were carried out, and the predictive capability of the selected fracture models is discussed. Concluding remarks then summarize the experience gained with both the calibration and the application of the particular ductile fracture criteria.

Keywords: Ductile fracture, phenomenological criteria, cutting process, explicit formulation, AISI 1045 steel.

1716 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase in credit card usage, the volume of credit card misuse has also increased significantly, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while he/she is interacting with his/her electronic devices. These approaches rely on how people behave (rather than on what they know or possess), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.

Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey

1715 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform, used to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrate that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency and effectiveness in monitoring and controlling the wastewater purification processes.
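
The sketch below is not the Gramb platform code; it only illustrates the general anomaly-detection pattern behind one of the compared baselines: fit a forecasting model to a water-quality series and flag points whose residuals exceed a threshold. The series, ARIMA order and threshold are assumptions.

```python
# Simplified residual-based anomaly flagging on a synthetic water-quality series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 300
# Hypothetical hourly turbidity readings with two injected spikes.
series = pd.Series(5 + 0.5 * np.sin(np.arange(n) / 12) + rng.normal(0, 0.1, n))
series.iloc[[120, 250]] += 3.0

fit = ARIMA(series, order=(2, 0, 1)).fit()
residuals = series - fit.predict()
threshold = 4 * residuals.std()
anomalies = series.index[np.abs(residuals) > threshold]
print("flagged anomalies at:", list(anomalies))
```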

Keywords: EIoT, machine learning, anomaly detection, environment monitoring.

1714 Kinetic Modeling of the Fischer-Tropsch Reactions and Modeling Steady State Heterogeneous Reactor

Authors: M. Ahmadi Marvast, M. Sohrabi, H. Ganji

Abstract:

The rates of production of the main products of the Fischer-Tropsch reactions over a Fe/HZSM5 bifunctional catalyst in a fixed bed reactor are investigated over a broad range of temperature, pressure, space velocity, H2/CO feed molar ratio and CO2, CH4 and water flow rates. Model discrimination and parameter estimation were performed according to the integral method of kinetic analysis. Owing to the lack of mechanism development for Fischer-Tropsch synthesis on bifunctional catalysts, 26 different models were tested and the best model was selected. Comprehensive one- and two-dimensional heterogeneous reactor models are developed to simulate the performance of fixed-bed Fischer-Tropsch reactors. To reduce the computational time for optimization purposes, an Artificial Feed Forward Neural Network (AFFNN) has been used to describe intra-particle mass and heat transfer diffusion in the catalyst pellet. It is seen that the products' reaction rates have a direct relation with the H2 partial pressure and an inverse relation with the CO partial pressure. The results show that the hybrid model is in good agreement with the rigorous mechanistic model while being about 25-30 times faster.

Keywords: Fischer-Tropsch, heterogeneous modeling, kinetic study.

1713 An Application for Risk of Crime Prediction Using Machine Learning

Authors: Luis Fonseca, Filipe Cabral Pinto, Susana Sargento

Abstract:

The increase of the world population, especially in large urban centers, has resulted in new challenges, particularly for the control and optimization of public safety. Thus, in the present work, a solution is proposed for the prediction of criminal occurrences in a city based on historical incident data and demographic information. The entire research and implementation are presented, starting with the data collection from its original source, the treatment and transformations applied to the data, the choice, evaluation and implementation of the Machine Learning models, and finally the application layer. Classification models are implemented to predict criminal risk for a given time interval and location. Machine Learning algorithms such as Random Forest, Neural Networks, K-Nearest Neighbors and Logistic Regression are used to predict occurrences, and their performance is compared according to the data processing and transformations used. The results show that the use of Machine Learning techniques helps to anticipate criminal occurrences, which contributes to the reinforcement of public security. Finally, the models were deployed on a platform that provides an API to enable other entities to request predictions in real time. An application is also presented in which criminal predictions can be visualized.

Keywords: Crime prediction, machine learning, public safety, smart city.

1712 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern of Morocco

Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui

Abstract:

Logistic regression (LR) and multivariate adaptive regression splines (MarSpline) are applied and verified for the analysis of a landslide susceptibility map in Oudka, Morocco, using a geographical information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology and lithology, eight factors related to landslides (elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology map and Normalized Difference Vegetation Index, NDVI) were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two mentioned methods. Before the calculation, the database was divided into two parts, the first for the formation of the model and the second for validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
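
For illustration only, the sketch below computes success and prediction rates as AUCs for a logistic-regression susceptibility model, the way the LR/MarSpline comparison above is scored. The conditioning factors and landslide labels are random stand-ins for the Oudka database.

```python
# Minimal sketch: training (success rate) and validation (prediction rate) AUCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 500
# Hypothetical conditioning factors: elevation, slope, aspect, distances, NDVI, etc.
X = rng.normal(size=(n, 8))
y = (X[:, 1] + 0.5 * X[:, 4] + rng.normal(0, 1, n) > 0).astype(int)  # 1 = landslide cell

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("success rate AUC:   ", roc_auc_score(y_tr, lr.predict_proba(X_tr)[:, 1]))
print("prediction rate AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
```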

Keywords: Landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate, Morocco.

1711 Study of Aero-thermal Effects with Heat Radiation in Optical Side Window

Authors: Chun-Chi Li, Da-Wei Huang, Yin-Chia Su, Liang-Chih Tasi

Abstract:

In hypersonic environments, the aerothermal effect makes it difficult for the optical side windows of optically guided missiles to withstand high heat, producing cracking or breaking and resulting in an inability to function. This study used computational fluid mechanics to investigate the external cooling jet conditions of optical side windows. The k-ε and k-ω turbulence models were simulated. To better reflect actual aerothermal environments, a thermal radiation model was added to examine suitable amounts of external coolant and the aero-thermodynamic behavior of the optical window. The simulation results indicate that, when there are no external cooling jets, the airflow over the optical window and the tail groove produces vortices, and the temperatures at these two locations reach a peak of approximately 1600 K. When the external cooling jets operated at 0.15 kg/s, the surface temperature of the optical window dropped to approximately 280 K. When thermal radiation conditions were added, heat flux dissipation was faster and the surface temperature of the optical window fell from 280 K to approximately 260 K. The difference in influence of the k-ε and k-ω turbulence models on the optical window surface temperature was not significant.

Keywords: aero-optical side window, aerothermal effect, cooling, hypersonic flow

1710 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report the quantitative structure-activity relationship of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct the CoMSIA QSAR modeling. The Partial Least-Squares (PLS) analysis method was used to conduct the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets and were evaluated with various CoMSIA parameters to obtain the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was used for the study, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.
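
The sketch below illustrates the PLS step with cross-validation, reporting r2 and q2 as in the workflow above; it uses random descriptors rather than CoMSIA fields, and the compound count is the only value taken from the abstract.

```python
# Hedged PLS/QSAR sketch: r2 on the full fit, q2 from leave-one-out cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
n, p = 46, 30                       # 46 compounds, hypothetical descriptor count
X = rng.normal(size=(n, p))
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, n)   # hypothetical activity values

pls = PLSRegression(n_components=3)
pls.fit(X, y)
r2 = pls.score(X, y)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_cv.ravel()) ** 2) / np.sum((y - y.mean()) ** 2)
print("r2:", round(r2, 3), "q2:", round(q2, 3))
```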

Keywords: 3D QSAR, CoMSIA, Triazoles.

1709 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High-resolution rainfall data are very important as input to hydrological models. Among the models for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly and 10-minute) from daily data for an approximately 20-year record period. The process was carried out with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rainfall datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated data are concentrated in shorter time periods and produce stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging considering the overestimation of generated high-resolution rainfall data.
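
A small sketch of the kind of check described follows: comparing observed and disaggregated rainfall depths with the two-sample Kolmogorov-Smirnov test. The depth samples are synthetic, not the study's data.

```python
# Two-sample K-S comparison of observed vs. disaggregated rainfall depths (synthetic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
observed_4h = rng.exponential(scale=2.0, size=500)    # hypothetical 4-hourly depths (mm)
generated_4h = rng.exponential(scale=2.1, size=500)   # hypothetical disaggregated depths

statistic, p_value = stats.ks_2samp(observed_4h, generated_4h)
print(f"K-S statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# A large p-value means the generated distribution cannot be distinguished from the
# observed one at the chosen significance level.
```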

Keywords: DiMoN tool, disaggregation, exceedance probability, Kolmogorov-Smirnov Test, rainfall.

1707 Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

Authors: Elena Ezhova, Vadim Mottl, Olga Krasotkina

Abstract:

The problem of estimating time-varying regression is inevitably concerned with the necessity to choose the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case, the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by the factor of the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
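
As background for the generalization discussed above, the sketch below states the classical criterion only: AIC = 2k - 2 ln L, with k free parameters and likelihood L. The log-likelihood values are illustrative, not from the paper.

```python
# Classical Akaike Information Criterion; lower AIC is preferred.
def aic(log_likelihood: float, n_params: int) -> float:
    return 2 * n_params - 2 * log_likelihood

# Toy comparison of two nested regression fits (numbers are illustrative).
print(aic(log_likelihood=-120.4, n_params=3))   # simpler model
print(aic(log_likelihood=-118.9, n_params=6))   # richer model
```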

Keywords: Time varying regression, time-volatility of regression coefficients, Akaike Information Criterion (AIC), Kullback information maximization principle.

1707 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models

Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour

Abstract:

Although many experiments have been done on enhanced oil recovery, few consider the effects of local and global heterogeneity on the efficiency of enhanced oil recovery based on polymer-surfactant flooding. In this research, we performed numerous water-flooding and polymer-surfactant-flooding experiments on a five-spot glass micromodel under different conditions, such as different positions of the layers. For these experiments, five different micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model and one heterogeneous model. In order to incorporate the effect of heterogeneity of the porous media, the three types of pore structures were distributed randomly and in equal ratio throughout the heterogeneous micromodel network according to a random normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the mainstream, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases with the angle of the layers, and this recovery factor is strongly affected by local heterogeneity around the injection zone.

Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.

1706 Students' Perception of the Evaluation System in Architecture Studios

Authors: Badiossadat Hassanpour, Nangkula Utaberta, Azami Zaharim, Nurakmal Goh Abdullah

Abstract:

Architecture education has been based on apprenticeship models, and its nature has not changed much over a long period; the main source of change has been its evaluation process and system. It is undeniable that art and architecture education is based almost entirely on transmitting knowledge from instructor to students. In contrast to other majors, this transmission happens through iteration and practice, with studio masters trying to control the design process and improve skills in the form of supervision and critique, and the evaluation ends with marks given to the students' achievements. The importance of the role of evaluation and assessment is therefore obvious, and it is not irrelevant to say that if we want to understand an architecture education system, we must first study its assessment procedures. The evolution of these changes in Western countries is well documented. However, this procedure has been largely disregarded in Malaysia, and there is a severe lack of research and documentation in this area. Malaysia, as a developing and multicultural country involving different races and cultures, is a proper setting for scrutinizing and understanding evaluation systems and the degree of acceptability of the currently implemented models, in order to keep the evaluation and assessment procedure abreast of the needs of different generations, cultures and even genders. This paper attempts to answer the questions of how evaluation and assessment are performed and how students perceive this evaluation system in the Malaysian context. The main contribution of this work is to the international debate on evaluation models.

Keywords: Architecture, assessment, design studio, learning

1705 Domain Driven Design vs Soft Domain Driven Design Frameworks

Authors: Mohammed Salahat, Steve Wade

Abstract:

This paper presents and compares the SSDDD "Systematic Soft Domain Driven Design" framework to the DDD "Domain Driven Design" framework as a soft-systems approach to information systems development. The SSDDD framework uses SSM as a guiding methodology, within which we have embedded a sequence of design tasks based on UML, leading to the implementation of a software system using the Naked Objects framework. This framework has been used in action research projects that have involved the investigation and modelling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used as a guiding methodology to explore the problem situation and to develop the domain model using UML for the given business domain. The framework was proposed and evaluated in our previous works; a comparison between SSDDD and DDD is presented in this paper to show how SSDDD improves on DDD as an approach to modelling and implementing business domain perspectives for information systems development. The comparison process, the results, and the improvements are presented in the following sections of this paper.

Keywords: SSM, UML, domain-driven design, soft domain-driven design, naked objects, soft language, information retrieval, multimethodology.

1704 Environmental Effects on Energy Consumption of Smart Grid Consumers

Authors: S. M. Ali, A. Salam Khan, A. U. Khan, M. Tariq, M. S. Hussain, B. A. Abbasi, I. Hussain, U. Farid

Abstract:

Environment and surroundings play a pivotal role in structuring the life-style of consumers, and living standards in turn affect their energy consumption. In the smart grid paradigm, climate drifts, weather parameters and the green environment relate directly to the energy profiles of various consumers, such as residential, commercial and industrial. Considering these factors helps policymakers in shaping utility load curves and in the optimal management of demand and supply. Thus, there is a pressing need to develop correlation models of load and weather parameters and to critically analyze the factors affecting the energy profiles of smart grid consumers. In this paper, we elaborate the various environmental and weather parameters affecting the demand of consumers. Moreover, we develop correlation models, namely Pearson, Spearman, and Kendall, to capture the inter-relation between the dependent (load) parameter and the independent (weather) parameters. Furthermore, we validate our discussion with real-time data from the state of Texas. The numerical simulations prove the effective relation of climatic drifts with the energy consumption of smart grid consumers.
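
The sketch below computes the three correlation measures named above on synthetic temperature/load data; it is only an illustration of the analysis step, not the Texas dataset.

```python
# Pearson, Spearman and Kendall correlations between a weather parameter and load.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(7)
temperature = rng.uniform(10, 40, 365)                   # hypothetical daily temperature (C)
load = 200 + 8 * temperature + rng.normal(0, 25, 365)    # hypothetical daily load (MW)

print("Pearson :", pearsonr(temperature, load)[0])
print("Spearman:", spearmanr(temperature, load)[0])
print("Kendall :", kendalltau(temperature, load)[0])
```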

Keywords: Climatic drifts, correlation analysis, energy consumption, smart grid, weather parameter.

1703 Emulation of a Wind Turbine Using Induction Motor Driven by Field Oriented Control

Authors: L. Benaaouinate, M. Khafallah, A. Martinez, A. Mesbahi, T. Bouragba

Abstract:

This paper concerns the modeling, simulation, and emulation of a wind turbine emulator for standalone wind energy conversion systems. By using an emulation system, we aim to reproduce the dynamic behavior of the wind turbine torque on the generator shaft: it provides a testing facility to optimize generator control strategies in a controlled environment, without reliance on natural resources. The aerodynamic, mechanical and electrical models are detailed, as well as the control of the pitch angle using fuzzy logic for horizontal-axis wind turbines. The wind turbine emulator consists mainly of an induction motor with an AC power drive with torque control. The control of the induction motor and the mathematical models of the wind turbine are designed in the MATLAB/Simulink environment. The simulation results confirm the effectiveness of the induction motor control system and the functionality of the wind turbine emulator in providing all the necessary parameters of the wind turbine system, such as wind speed, output torque, power coefficient and tip speed ratio. The findings are of direct practical relevance.

Keywords: Wind turbine, modeling, emulator, electrical generator, renewable energy, induction motor drive, field oriented control, real time control, wind turbine emulator, pitch angle control.

1702 Effect of Relative Permeability on Well Testing Behavior of Naturally Fractured Lean Gas Condensate Reservoirs

Authors: G.H. Montazeri, Z. Dastkhan, H. Aliabadi

Abstract:

Gas condensate reservoirs show complicated thermodynamic behavior when their pressure falls below the dew point pressure. Condensate blockage around the producing well causes a significant reduction in the production rate as the bottom-hole pressure drops below the saturation pressure. The main objective of this work was to examine the well test analysis of a naturally fractured lean gas condensate reservoir and to investigate the effect of the condensate formed around the well-bore on the behavior of the single-phase pseudo-pressure and its derivative curves. In this work, a naturally fractured lean gas condensate reservoir is simulated with a compositional simulator. Different sensitivity analyses were done on the Corey parameters, and the simulator results were fed into analytical well testing software. For consideration of these phenomena, eighteen compositional models with the capillary number effect were constructed. The matrix relative permeability obeys a Corey relative permeability model, while the relative permeability in the fractures is linear. The well testing behavior of these models is studied and interpreted. The results show that different sensitivity analyses on the matrix relative permeability do not have a strong effect on the well testing behavior, even when most of the matrix around the well is occupied by condensate.
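
For readers unfamiliar with Corey-type curves, the sketch below evaluates a generic Corey power-law relative permeability of the kind assigned to the matrix in such simulation models; the end points and exponent are illustrative assumptions, not the study's inputs.

```python
# Generic Corey-type gas relative permeability curve (illustrative parameters only).
import numpy as np

def corey_krg(sg, sgc=0.05, swc=0.20, krg_max=0.9, ng=2.0):
    """Gas relative permeability vs. gas saturation using a Corey power law."""
    s_eff = np.clip((sg - sgc) / (1.0 - sgc - swc), 0.0, 1.0)
    return krg_max * s_eff ** ng

for sg in (0.1, 0.3, 0.5, 0.7):
    print(f"Sg={sg:.1f} -> krg={corey_krg(sg):.3f}")
```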

Keywords: Lean gas, fractured condensate reservoir, capillary number, well testing analysis, relative permeability.

1701 End-to-End Spanish-English Sequence Learning Translation Model

Authors: Vidhu Mitha Goutham, Ruma Mukherjee

Abstract:

The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve, stable, cost-free open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path to higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder background, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model including source language detection. We obtain competitive results using a model trained on a duo-lingo corpus, providing a prospective, ready-made plug-in for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, it also identifies itself as a well-optimized deep neural network model and solution.

Keywords: Attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation.

1700 ECG-Based Heartbeat Classification Using Convolutional Neural Networks

Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
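
The following is a compact sketch, not the paper's architecture, of a 1-D CNN over single-lead heartbeat segments with five output classes; the segment length, channel sizes and layer counts are assumptions chosen only to make the example self-contained.

```python
# Minimal 1-D CNN heartbeat classifier sketch in PyTorch (assumed shapes and sizes).
import torch
import torch.nn as nn

class HeartbeatCNN(nn.Module):
    def __init__(self, n_classes: int = 5, segment_len: int = 187):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (segment_len // 4), n_classes)

    def forward(self, x):                  # x: (batch, 1, segment_len)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = HeartbeatCNN()
dummy = torch.randn(8, 1, 187)             # batch of 8 hypothetical heartbeat segments
print(model(dummy).shape)                   # -> torch.Size([8, 5])
```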

Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.

1699 Finite Element Analysis of Sheet Metal Airbending Using Hyperform LS-DYNA

Authors: Himanshu V. Gajjar, Anish H. Gandhi, Harit K. Raval

Abstract:

Air bending is one of the important metal forming processes because of its simplicity and wide field of application. The accuracy of the analytical and empirical models reported for the analysis of bending processes is governed by simplifying assumptions, and these models do not consider the effect of dynamic parameters. A number of studies have reported finite element analysis (FEA) of V-bending, U-bending, and air V-bending processes. FEA of bending is found to be very sensitive to many physical and numerical parameters, and FE models must be computationally efficient for practical use. The reported work presents 3D FEA of the air bending process using Hyperform LS-DYNA and its comparison with published 3D FEA results of air bending in Ansys LS-DYNA and with experimental results. Observing the planar symmetry and based on the assumption of a plane strain condition, the air bending problem was modeled in 2D with a symmetric boundary condition across the width. The stress-strain results of the 2D FEA were compared with the 3D FEA results and experiments. Simplification of the air bending problem from 3D to 2D resulted in a tremendous reduction in solution time with only a marginal effect on the stress-strain results. FE model simplification by studying the problem symmetry is a more efficient and practical approach for the solution of more complex, large-dimension, slow forming processes.

Keywords: Air V-bending, Finite element analysis, Hyperform LS-DYNA, Planar symmetry.

1698 Vibration Suppression of Timoshenko Beams with Embedded Piezoelectrics Using POF

Authors: T. C. Manjunath, B. Bandyopadhyay

Abstract:

This paper deals with the design of a periodic output feedback controller for a flexible beam structure modeled with Timoshenko beam theory, the Finite Element Method, state space methods and the embedded piezoelectrics concept. The first 3 modes are considered in modeling the beam. The main objective of this work is to control the vibrations of the beam when it is subjected to an external force. Shear piezoelectric sensors and actuators are embedded into the top and bottom layers of a flexible aluminum beam structure, thus making it intelligent and self-adaptive. The composite beam is divided into 5 finite elements; the control actuator is placed at finite element position 1, whereas the sensor is varied from position 2 to 5, i.e., from near the fixed end to the free end. Four state space SISO models are thus developed. Periodic Output Feedback (POF) controllers are designed for the 4 SISO models of the same plant to control the flexural vibrations. The effect of placing the sensor at different locations on the beam is observed, and the performance of the controller is evaluated for vibration control. Conclusions are finally drawn.

Keywords: Smart structure, Timoshenko beam theory, Periodic output feedback control, Finite Element Method, State space model, SISO, Embedded sensors and actuators, Vibration control.

1697 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners in planning testing and inspection resources at the early phases of software development. However, a major challenge faced during the training of any classification model is the imbalanced nature of software quality data. A dataset with very few instances of the minority outcome category leads to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict the minority category correctly. Thus, for a given dataset, a minority of classes may be change-prone whereas a majority of classes may be non-change-prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics while dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of the open-source Android dataset and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
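
As a hedged illustration of the general idea (not the study's Android data, learners or MetaCost setup), the sketch below compares a plain classifier with a cost-sensitive one on imbalanced data and scores both with a robust metric.

```python
# Plain vs. class-weighted classifier on imbalanced synthetic change-proneness data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic data with roughly 10% change-prone (minority) instances.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
weighted = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X_tr, y_tr)

print("plain    balanced accuracy:", balanced_accuracy_score(y_te, plain.predict(X_te)))
print("weighted balanced accuracy:", balanced_accuracy_score(y_te, weighted.predict(X_te)))
```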

Keywords: Change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics.
