Search results for: 3-D electronic models.
2464 Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset used is the Beijing Air Quality Dataset from the University of California, Irvine (UCI) repository, a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while look-back windows of 2 or 4 days perform best for predicting 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
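As an illustration of the fixed look-back window described in this abstract, the following minimal sketch (not the authors' code) turns a multivariate hourly series into supervised samples; the array name, the 24-hour window and the 1-hour horizon are assumptions for illustration.

```python
import numpy as np

def make_windows(series, look_back=24, horizon=1, target_col=0):
    """Build (X, y) pairs from a (T, n_features) array: each X sample is a
    look-back window, each y is the target column `horizon` steps ahead."""
    X, y = [], []
    for t in range(len(series) - look_back - horizon + 1):
        X.append(series[t:t + look_back])                           # past window
        y.append(series[t + look_back + horizon - 1, target_col])   # future value
    return np.array(X), np.array(y)

# Example: 5 years of hourly data with 7 features (synthetic placeholder values)
series = np.random.rand(5 * 365 * 24, 7)
X, y = make_windows(series, look_back=24, horizon=1)
print(X.shape, y.shape)  # (43776, 24, 7) (43776,)
```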
2463 A New Dimension in Software Risk Management
Authors: Masood Uzzafer
Abstract:
A dynamic risk management framework for software projects is presented. Currently available software risk management frameworks and risk assessment models are static in nature and lack feedback capability. Such risk management frameworks are not capable of assessing future changes in risk events. A dynamic risk management framework for software projects is therefore needed, one that provides a forward-looking assessment of risk events.
Keywords: Software Risk Management, Dynamic Models, Software Project Management.
2462 A Block World Problem Based Sudoku Solver
Authors: Luciana Abednego, Cecilia Nugraheni
Abstract:
There are many approaches proposed for solving Sudoku puzzles. One of them is to model the puzzles as block world problems. Three models of Sudoku solvers based on this approach have been proposed so far. Each model expresses the Sudoku solver as a parameterized multi-agent system. In this work, we propose a new model which is an improvement over the existing models. This paper presents the development of a Sudoku solver that implements all the proposed models. Some experiments have been conducted to determine the performance of each model.
Keywords: Sudoku puzzle, Sudoku solver, block world problem, parameterized multi agent systems.
2461 Shape Restoration of the Left Ventricle
Authors: May-Ling Tan, Yi Su, Chi-Wan Lim, Liang Zhong, Ru-San Tan
Abstract:
This paper describes an automatic algorithm to restore the shape of three-dimensional (3D) left ventricle (LV) models created from magnetic resonance imaging (MRI) data using a geometry-driven optimization approach. Our basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration. A geometrical measure known as the Minimum Principal Curvature (κ2) is used to assess the smoothness of the LV. This measure is used to construct the objective function of a two-step optimization process. The objective of the optimization is to achieve a smooth epicardial shape by iterative in-plane translation of the MRI slices. Quantitatively, this yields a minimum sum of the magnitudes of κ2 where κ2 is negative. A limited-memory quasi-Newton algorithm, L-BFGS-B, is used to solve the optimization problem. We tested our algorithm on an in vitro theoretical LV model and 10 in vivo patient-specific models which contain significant motion artifacts. The results show that our method is able to automatically restore the shape of LV models back to smoothness without altering the general shape of the model. The magnitudes of the in-plane translations are also consistent with existing registration techniques and experimental findings.
Keywords: Magnetic Resonance Imaging, Left Ventricle, Shape Restoration, Principal Curvature, Optimization.
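The two-step optimization described above relies on the L-BFGS-B solver; the sketch below (not the authors' code) shows the general pattern with scipy.optimize, where the mocked objective, the per-slice translation vector and the ±5 mm bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def smoothness_cost(translations):
    """Illustrative stand-in for the real objective, which sums the magnitudes of the
    minimum principal curvature (kappa2) over regions where kappa2 is negative after
    the in-plane slice translations are applied. Here it is mocked as a smooth function."""
    return float(np.sum(translations ** 2))

n_slices = 10
x0 = np.zeros(2 * n_slices)          # one (dx, dy) in-plane translation per MRI slice
bounds = [(-5.0, 5.0)] * len(x0)     # assumed +/-5 mm translation limits

res = minimize(smoothness_cost, x0, method="L-BFGS-B", bounds=bounds)
print(res.success, res.fun)
```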
2460 Forecasting Tala-AUD and Tala-USD Exchange Rates with ANN
Authors: Shamsuddin Ahmed, M. G. M. Khan, Biman Prasad, Avlin Prasad
Abstract:
The focus of this paper is to construct daily time series exchange rate forecast models for the Samoan Tala/USD and Tala/AUD over the years 2008 to 2012 using neural networks. The performance of the models was measured using various error functions such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). Our empirical findings suggest that the AR(1) model is an effective tool to forecast the Tala/USD and Tala/AUD.
Keywords: Neural Network Forecasting Model, Autoregressive time series, Exchange rate, Tala/AUD, Winters model.
2459 Mathematical Models of Flow Shop and Job Shop Scheduling Problems
Authors: Miloš Šeda
Abstract:
In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented.
Keywords: Flow shop, job shop, mixed integer model, representation scheme.
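For readers unfamiliar with the permutation flow shop structure discussed in this abstract, here is a minimal sketch (not taken from the paper) that computes the makespan of a given job permutation from a processing-time matrix; the 4-job, 3-machine data are invented for illustration.

```python
def makespan(perm, p):
    """Completion-time recursion for a permutation flow shop.
    p[j][m] = processing time of job j on machine m; jobs follow `perm` on every machine."""
    n_machines = len(p[0])
    completion = [0.0] * n_machines
    for job in perm:
        for m in range(n_machines):
            # a job starts on machine m when both the machine and the job's previous operation are free
            ready = completion[m] if m == 0 else max(completion[m], completion[m - 1])
            completion[m] = ready + p[job][m]
    return completion[-1]

# Illustrative 4 jobs x 3 machines processing times
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 3, 2]]
print(makespan([0, 1, 2, 3], p))  # makespan of the identity permutation: 16
```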
2458 Influence of Behavior Models on the Response of a Reinforced Concrete Frame: Multi-Fiber Approach
Authors: A. Kahil, A. Nekmouche, N. Khelil, I. Hamadou, M. Hamizi, Ne. Hannachi
Abstract:
The objective of this work is to study the influence of the nonlinear behavior models of concrete (concrete_BAEL and concrete_UNI), as well as of the confinement provided by the transverse reinforcement, on the seismic response of a reinforced concrete frame (RC frame). These models, as well as the confinement, are implemented in the Cast3m finite element code. Two cases are studied: taking into account the confinement provided by the transverse reinforcement (TAC) and disregarding the confinement (WCC), both in the presence and absence of a vertical load. The application was made on an RC frame with 3 levels and 2 spans. The results show that, on the one hand, the concrete_BAEL model slightly underestimates the resistance of the RC frame in the plastic range, whereas the concrete_UNI model gives the best results compared to the simplified concrete_BAEL model. On the other hand, for the concrete_UNI model, taking the confinement into account has no influence on the behavior of the RC frame under imposed displacement up to a vertical load of 500 kN.
Keywords: Reinforced concrete, nonlinear calculation, behavior laws, fiber model, confinement, numerical simulation.
2457 Wave-Structure Interaction for Submerged Quarter-Circle Breakwaters of Different Radii - Reflection Characteristics
Authors: Arkal Vittal Hegde, L. Ravikiran
Abstract:
The paper presents the results of a series of experiments conducted on physical models of a quarter-circle breakwater (QBW) in a two-dimensional monochromatic wave flume. The purpose of the experiments was to evaluate the reflection coefficient Kr of QBW models of different radii (R) for different submergence ratios (d/hc), where d is the depth of water and hc is the height of the breakwater crest from the sea bed. The radii of the breakwater models studied were 20 cm, 22.5 cm, 25 cm and 27.5 cm, and the submergence ratios used varied from 1.067 to 1.667. The wave climate off the Mangalore coast was used for arriving at the various model wave parameters. The incident wave heights (Hi) used in the flume varied from 3 to 18 cm, and wave periods (T) ranged from 1.2 s to 2.2 s. Water depths (d) of 40 cm, 45 cm and 50 cm were used in the experiments. The data collected were analyzed to compute the variation of the reflection coefficient Kr = Hr/Hi (where Hr is the reflected wave height) with the wave steepness Hi/gT² for various R/Hi values (R being the breakwater radius). It was found that the reflection coefficient increased as the incident wave steepness increased. Also, as the wave height decreased, the reflection coefficient decreased, and as the structure radius R increased, Kr decreased slightly.
Keywords: Incident wave steepness, Quarter-circle breakwater, Reflection coefficient, Submergence ratio.
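To make the dimensionless groups used above explicit, here is a minimal sketch with invented sample values (the crest height hc in particular is an assumption chosen to fall within the reported submergence-ratio range):

```python
g = 9.81               # gravitational acceleration, m/s^2

# Illustrative values within the ranges reported in the abstract
Hi, Hr = 0.12, 0.03    # incident and reflected wave heights, m
T = 1.6                # wave period, s
d, hc = 0.45, 0.30     # water depth and crest height, m (hc assumed)
R = 0.25               # breakwater radius, m

Kr = Hr / Hi                   # reflection coefficient
steepness = Hi / (g * T ** 2)  # incident wave steepness
print(f"Kr = {Kr:.3f}, Hi/gT^2 = {steepness:.5f}, d/hc = {d / hc:.3f}, R/Hi = {R / Hi:.2f}")
```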
2456 Meteorological Data Study and Forecasting Using Particle Swarm Optimization Algorithm
Authors: S. Esfandeh, M. Sedighizadeh
Abstract:
Weather systems use enormously complex combinations of numerical tools for study and forecasting. Unfortunately, due to phenomena in the world climate, such as the greenhouse effect, classical models may become insufficient, mostly because they lack adaptation. Therefore, the weather forecast problem is well suited to heuristic approaches, such as Evolutionary Algorithms. Experimentation with heuristic methods like the Particle Swarm Optimization (PSO) algorithm can lead to the development of new insights or promising models that can be fine-tuned with more focused techniques. This paper describes a PSO approach for analysis and prediction of data and provides experimental results of the aforementioned method on real-world meteorological time series.
Keywords: Weather, Climate, PSO, Prediction, Meteorological.
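As a concrete illustration of the PSO heuristic mentioned above (a generic textbook variant, not the authors' implementation), the sketch below minimizes a simple objective; the inertia and acceleration coefficients are common default assumptions, and a real application would plug in a forecast-error objective instead.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Generic particle swarm optimization: each particle is pulled toward its own
    best position and the swarm's best position."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example objective: the sphere function
best, best_val = pso(lambda p: np.sum(p ** 2), dim=3)
print(best, best_val)
```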
2455 The Link between Unemployment and Inflation Using Johansen’s Co-Integration Approach and Vector Error Correction Modelling
Authors: Sagaren Pillay
Abstract:
In this paper bi-annual time series data on unemployment rates (from the Labour Force Survey) are expanded to quarterly rates and linked to quarterly unemployment rates (from the Quarterly Labour Force Survey). The resultant linked series and the consumer price index (CPI) series are examined using Johansen’s cointegration approach and vector error correction modeling. The study finds that both the series are integrated of order one and are cointegrated. A statistically significant co-integrating relationship is found to exist between the time series of unemployment rates and the CPI. Given this significant relationship, the study models this relationship using Vector Error Correction Models (VECM), one with a restriction on the deterministic term and the other with no restriction.
A formal statistical confirmation of the existence of a unique linear and lagged relationship between inflation and unemployment for the period between September 2000 and June 2011 is presented. For the given period, the CPI was found to be an unbiased predictor of the unemployment rate. This relationship can be explored further for the development of appropriate forecasting models incorporating other study variables.
Keywords: Forecasting, lagged, linear, relationship.
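A minimal sketch of the workflow described above using statsmodels; the synthetic data, the lag order and the deterministic-term option are illustrative assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Illustrative quarterly data frame with the two series used in the study
rng = np.random.default_rng(1)
n = 44  # roughly Sep 2000 - Jun 2011, quarterly
cpi = np.cumsum(rng.normal(0.5, 1.0, n))          # integrated of order one by construction
unemp = 0.8 * cpi + rng.normal(0, 1.0, n)         # cointegrated with cpi by construction
data = pd.DataFrame({"unemployment": unemp, "cpi": cpi})

# Johansen cointegration test (constant term, one lagged difference assumed)
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("critical values:", jres.cvt)

# Vector error correction model with cointegration rank 1
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```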
2454 Values as a Predictor of Cyber-bullying Among Secondary School Students
Authors: Bülent Dilmaç, Didem Aydoğan
Abstract:
The use of new technologies such as the internet (e-mail, chat rooms) and cell phones has steeply increased in recent years. Especially among children and young people, the use of technological tools and equipment is widespread. Although many teachers and administrators now recognize the problem of school bullying, few are aware that students are being harassed through electronic communication. Referred to as electronic bullying, cyber bullying, or online social cruelty, this phenomenon includes bullying through email, instant messaging, in a chat room, on a website, or through digital messages or images sent to a cell phone. Cyber bullying is defined as causing deliberate/intentional harm to others using the internet or other digital technologies. This study has a quantitative research design and uses a relational survey as its method. The participants consisted of 300 secondary school students in the city of Konya, Turkey. 195 (64.8%) participants were female and 105 (35.2%) were male. 39 (13%) students were at grade 1, 187 (62.1%) were at grade 2 and 74 (24.6%) were at grade 3. The “Cyber Bullying Question List" developed by Arıcak (2009) was given to students. Following questions about demographics, a functional definition of cyber bullying was provided. In order to specify students' human values, the “Human Values Scale (HVS)" developed by Dilmaç (2007) for secondary school students was administered. The scale consists of 42 items in six dimensions. Data analysis was conducted by the primary investigator of the study using SPSS 14.00 statistical analysis software. Descriptive statistics were calculated for the analysis of students' cyber bullying behaviour, and simple regression analysis was conducted in order to test whether each value in the scale could explain cyber bullying behaviour.
Keywords: Cyber bullying, Values, Secondary School Students.
2453 Vibration Attenuation Using Functionally Graded Material
Authors: Saeed Asiri, Hassan Hedia, Wael Eissa
Abstract:
The aim of the work was to attenuate the vibration amplitude in a Cessna 172 airplane wing by using functionally graded material (FGM) instead of a uniform or composite material. Wing strength was verified by means of a stress analysis study, while wing vibration amplitudes and mode shapes were obtained by means of modal and harmonic analyses. The methodology was first verified on a simple cantilever plate model; the results were promising, and the same methodology was then applied to the airplane wing model. Results for aluminum models, titanium models, and functionally graded aluminum-titanium models were compared, showing considerable vibration attenuation when using the FGM. Optimizing the FGM gradation satisfied our objective of reducing and attenuating the vibration amplitudes and demonstrated the effect of using FGM on vibration behavior. Testing aluminum-rich models and comparing them with titanium-rich models formed the optimization study in this paper. Results have shown a significant attenuation in vibration magnitudes when using FGM instead of a titanium plate, and aluminum wings with FGM spars instead of plain aluminum wings. It is also recommended that, in future work, the model scale be changed to 1:10 or even 1:1 when computing capabilities allow.
Keywords: Vibration, Attenuation, FGM, ANSYS2011, FEM.
2452 A Java Based Discrete Event Simulation Library
Authors: Brahim Belattar, Abdelhabib Bourouis
Abstract:
This paper describes important features of JAPROSIM, a free and open source simulation library implemented in Java programming language. It provides a framework for building discrete event simulation models. The process interaction world view adopted by JAPROSIM is discussed. We present the architecture and major components of the simulation library. A pedagogical example is given in order to illustrate how to use JAPROSIM for building discrete event simulation models. Further motivations are discussed and suggestions for improving our work are given.
Keywords: Discrete Event Simulation, Object-Oriented Simulation, JAPROSIM, Process Interaction Worldview, Java-based modeling and simulation.
2451 Comparative Approach of Measuring Price Risk on Romanian and International Wheat Market
Authors: Larisa N. Pop, Irina M. Ban
Abstract:
This paper aims to present the main instruments used in the economic literature for measuring price risk, pointing out the advantages brought by the conditional variance in this respect. The theoretical approach is exemplified by elaborating an EGARCH model for the price returns of wheat, both on the Romanian and on the international market. To our knowledge, no previous empirical research on price risk measurement for the Romanian markets, or study using the ARIMA-EGARCH methodology, has been conducted. After estimating the corresponding models, the paper compares the estimated conditional variance on the two markets.
Keywords: conditional variance, GARCH models, price risk, volatility.
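A minimal sketch of estimating a conditional-variance model of the kind discussed above with the Python `arch` package; the synthetic returns and the AR(1) mean with EGARCH(1,1) variance are illustrative assumptions, not the paper's exact ARIMA-EGARCH specification.

```python
import numpy as np
from arch import arch_model

# Illustrative daily wheat price returns (synthetic placeholder, in percent)
rng = np.random.default_rng(2)
returns = rng.normal(0.0, 1.2, 1000)

# AR(1) mean equation with EGARCH(1,1) conditional variance
model = arch_model(returns, mean="AR", lags=1, vol="EGARCH", p=1, o=1, q=1)
res = model.fit(disp="off")
print(res.summary())
print(res.conditional_volatility[:5])  # estimated conditional standard deviation
```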
2450 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration
Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine
Abstract:
The complex oblique shock phenomenon can be simply assumed as a normal shock at the constant-area section to simulate a sharp pressure increase and velocity decrease in 1D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models. Most researchers consider an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1D models. To this aim, two different ejector experimental test benches, a constant area-mixing (CAM) ejector and a constant pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated by experimental data for the two types of ejectors. These reference data are then used as input to the 1D model to calculate the lengths and the diameters of the ejectors. Afterwards, the design output geometry calculated by the 1D model is compared directly with the corresponding experimental geometry. It was found that there is good agreement between the ejector dimensions obtained by the 1D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and that assuming the normal shock at the inlet results in a more accurate length. Taking into account previous 1D models, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions.
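For reference, the normal shock jump conditions that a 1D ejector model typically applies at the assumed shock location are sketched below for a calorically perfect gas; real refrigerants such as R245fa and R141b deviate from this ideal-gas illustration.

```python
def normal_shock(M1, gamma=1.4):
    """Ideal-gas Rankine-Hugoniot relations across a normal shock.
    Returns downstream Mach number and static pressure/density ratios."""
    if M1 <= 1.0:
        raise ValueError("Normal shock requires supersonic upstream flow (M1 > 1).")
    M2 = ((1 + 0.5 * (gamma - 1) * M1**2) / (gamma * M1**2 - 0.5 * (gamma - 1))) ** 0.5
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)          # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2/rho1
    return M2, p_ratio, rho_ratio

# Example: sharp pressure rise and velocity drop for an upstream Mach number of 2
print(normal_shock(2.0))  # approximately (0.577, 4.5, 2.667)
```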
2449 Mathematical Modeling of Machining Parameters in Electrical Discharge Machining of FW4 Welded Steel
Authors: M.R.Shabgard, R.M.Shotorbani
Abstract:
FW4 is a newly developed hot die material widely used in forging die manufacturing. The right selection of the machining conditions is one of the most important aspects to take into consideration in the Electrical Discharge Machining (EDM) of FW4. In this paper, an attempt has been made to develop mathematical models relating the Material Removal Rate (MRR), Tool Wear Ratio (TWR) and surface roughness (Ra) to the machining parameters (current, pulse-on time and voltage). Furthermore, a study was carried out to analyze the effects of the machining parameters with respect to the listed technological characteristics. The results of the analysis of variance (ANOVA) indicate that the proposed mathematical models can adequately describe the performance within the limits of the factors being studied.
Keywords: Electrical Discharge Machining (EDM), linear regression technique, Response Surface Methodology (RSM).
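The response-surface regression relating a response such as MRR to current, pulse-on time and voltage can be illustrated with the following sketch; the synthetic design points and the full second-order model are assumptions for illustration, not the paper's fitted coefficients.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic design: columns are current (A), pulse-on time (us), voltage (V)
rng = np.random.default_rng(3)
X = rng.uniform([4, 25, 40], [32, 200, 120], size=(30, 3))
mrr = 0.8 * X[:, 0] + 0.02 * X[:, 1] + 0.01 * X[:, 0] * X[:, 1] + rng.normal(0, 2, 30)

# Second-order (quadratic + interaction) response surface for MRR
poly = PolynomialFeatures(degree=2, include_bias=False)
Xq = poly.fit_transform(X)
model = LinearRegression().fit(Xq, mrr)
print(dict(zip(poly.get_feature_names_out(["I", "Ton", "V"]), model.coef_.round(3))))
```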
2448 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances
Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels
Abstract:
The prediction models for the United States Medical Licensing Examination (USMLE) Steps 1 and 2 performances were constructed with a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models to accurately identify the most important predictors and yield valid range estimations of the Steps 1 and 2 scores. The application of the simulation modeling approach was deemed an effective way of predicting student performances on licensure examinations. Also, sensitivity analysis (also known as what-if analysis) in the simulation models was used to predict how the Steps 1 and 2 scores are affected by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of the Step 2 performance. Hence, institutions could screen qualified student applicants for interviews and document the effectiveness of the basic science education program based on the simulation results.
Keywords: Prediction Model, Sensitivity Analysis, Simulation Method, USMLE.
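A minimal sketch of the general approach described above, Monte Carlo simulation propagated through a fitted linear regression with a what-if shift; the predictor names, coefficients and distributions are illustrative assumptions, not those estimated in the study.

```python
import numpy as np

# Assumed fitted linear regression: Step 2 score ~ intercept + b1*MCAT_VR + b2*Step1
intercept, b_mcat_vr, b_step1 = 60.0, 1.5, 0.55
resid_sd = 8.0

rng = np.random.default_rng(4)
n_sim = 10_000
mcat_vr = rng.normal(9.5, 1.5, n_sim)      # simulated MCAT Verbal Reasoning scores
step1 = rng.normal(215, 18, n_sim)         # simulated Step 1 scores

step2 = intercept + b_mcat_vr * mcat_vr + b_step1 * step1 + rng.normal(0, resid_sd, n_sim)
lo, hi = np.percentile(step2, [2.5, 97.5])
print(f"simulated Step 2 range estimate: {lo:.1f} - {hi:.1f} (mean {step2.mean():.1f})")

# What-if (sensitivity) analysis: shift Step 1 up by 10 points
step2_shift = intercept + b_mcat_vr * mcat_vr + b_step1 * (step1 + 10) + rng.normal(0, resid_sd, n_sim)
print(f"mean change from +10 Step 1 points: {step2_shift.mean() - step2.mean():.1f}")
```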
2447 Comparison of Alternative Models to Predict Lean Meat Percentage of Lamb Carcasses
Authors: Vasco A. P. Cadavez, Fernando C. Monteiro
Abstract:
The objective of this study was to develop and compare alternative prediction equations of the lean meat proportion (LMP) of lamb carcasses. Forty (40) male lambs, 22 of the Churra Galega Bragançana Portuguese local breed and 18 of the Suffolk breed, were used. Lambs were slaughtered, and carcasses were weighed approximately 30 min later in order to obtain the hot carcass weight (HCW). After cooling at 4 °C for 24 h, a set of seventeen carcass measurements was recorded. The left side of each carcass was dissected into muscle, subcutaneous fat, inter-muscular fat, bone, and remainder (major blood vessels, ligaments, tendons, and thick connective tissue sheets associated with muscles), and the LMP was evaluated as the dissected muscle percentage. Prediction equations of LMP were developed, and fitting quality was evaluated through the coefficient of determination of estimation (R²e) and the standard error of estimate (SEE). Model validation was performed by k-fold cross-validation, and the coefficient of determination of prediction (R²p) and the standard error of prediction (SEP) were computed. The BT2 measurement was the best single predictor and accounted for 37.8% of the LMP variation with a SEP of 2.30%. The prediction of the LMP of lamb carcasses can be based on simple models, using the HCW and one fat thickness measurement as predictors.
Keywords: Bootstrap, Carcass, Lambs, Lean meat
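A minimal sketch of the k-fold cross-validation used to obtain R²p and SEP; the synthetic values for HCW, BT2 and LMP stand in for the measured data and are not the study's numbers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-ins for the 40 carcasses: hot carcass weight and BT2 fat thickness
rng = np.random.default_rng(5)
hcw = rng.normal(12.0, 2.0, 40)
bt2 = rng.normal(3.5, 1.0, 40)
lmp = 65 - 2.0 * bt2 - 0.5 * hcw + rng.normal(0, 2.0, 40)

X = np.column_stack([hcw, bt2])
model = LinearRegression()
pred = cross_val_predict(model, X, lmp, cv=KFold(n_splits=5, shuffle=True, random_state=0))

ss_res = np.sum((lmp - pred) ** 2)
r2_p = 1 - ss_res / np.sum((lmp - lmp.mean()) ** 2)   # coefficient of determination of prediction
sep = np.sqrt(ss_res / len(lmp))                      # standard error of prediction
print(f"R2p = {r2_p:.3f}, SEP = {sep:.2f}%")
```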
2446 Why Traditional Technology Acceptance Models Won't Work for Future Information Technologies?
Authors: Carsten Röcker
Abstract:
This paper illustrates why existing technology acceptance models are only of limited use for predicting and explaining the adoption of future information and communication technologies. It starts with a general overview of technology adoption processes and presents several theories for the acceptance as well as adoption of traditional information technologies. This is followed by an overview of the recent developments in the area of information and communication technologies. Based on the arguments elaborated in these sections, it is shown why the factors used to predict adoption in existing systems will not be sufficient for explaining the adoption of future information and communication technologies.
Keywords: Technology Diffusion, Technology Acceptance Models, Ambient Intelligence, Ubiquitous and Pervasive Computing.
2445 Quantification of E-Waste: A Case Study in Federal University of Espírito Santo, Brazil
Authors: Andressa S. T. Gomes, Luiza A. Souza, Luciana H. Yamane, Renato R. Siman
Abstract:
The segregation of waste of electrical and electronic equipment (WEEE) at the generating source, its qualitative and quantitative characterization and the identification of its origin, besides being integral parts of classification reports, are crucial steps to the success of its integrated management. The aim of this paper was to quantify WEEE generation at the Federal University of Espírito Santo (UFES), Brazil, as well as to define sources, temporary storage sites, main transportation routes and destinations, the most generated WEEE and its recycling potential. Quantification of the WEEE generated at the University between 2010 and 2015 was performed using data provided by UFES's sector of assets management. Information on EEE and WEEE flows on the campuses was obtained through questionnaires applied to University workers. A total of 6028 WEEE units of data processing equipment was recorded as disposed of by the University between 2010 and 2015. Among these wastes, the most generated were CRT screens, desktops, keyboards and printers. Furthermore, it was observed that these WEEE are temporarily stored in inappropriate places on the University campuses. In general, these WEEE units are donated to NGOs of the city or sold through auctions (2010 and 2013). As for the recycling potential, from the primary processing and further sale of printed circuit boards (PCB) from the computers, the amount collected could reach US$ 27,839.23. The results highlight the importance of a WEEE management policy at the University.
Keywords: Solid waste, waste of electric and electronic equipment, waste management, institutional generation of solid waste.
2444 Optimization of Strategies and Models Review for Optimal Technologies - Based On Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, Abdelazim Negm
Abstract:
Recently, green architecture has become a significant path toward a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures to keep the built environment more sustainable. The most common objective of green buildings is to minimize the overall impact of the built environment on ecosystems in general, and on human health and the natural environment in particular. This leads to protecting occupant health, improving employee productivity, reducing pollution and sustaining the environment. In green building design, multiple parameters, which may be interrelated, contradictory, vague and of a qualitative/quantitative nature, have to be considered. This paper presents a comprehensive critical state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture/buildings can be improved using the technologies that have been used for analysis, in order to seek optimal green solution strategies and models that assist in making the best possible decision among different alternatives.
Keywords: Green architecture/building, technologies, optimization, strategies, fuzzy techniques and models.
2443 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment
Authors: Fares Laouacheria, Said Kechida, Moncef Chabi
Abstract:
The objective of the study was hydrological routing modelling for the continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center's Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment river in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models were used, namely the Muskingum and Muskingum-Cunge routing models, for conducting this study. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor X and the travel time K on the output results are significant, the Muskingum routing model being more sensitive to the input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving the knowledge of the mechanisms of river floods, especially in ungauged river catchments.
Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model.
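For context, the standard Muskingum routing step driven by the weighting factor X and travel time K takes the form sketched below; the inflow hydrograph and parameter values are invented for illustration and are not the catchment's calibrated values.

```python
def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph with the Muskingum method:
    O2 = C0*I2 + C1*I1 + C2*O1, with coefficients derived from K, X and dt."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom   # note: c0 + c1 + c2 = 1
    outflow = [inflow[0]]                 # assume initial outflow equals initial inflow
    for i in range(1, len(inflow)):
        outflow.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[-1])
    return outflow

# Illustrative hydrograph (m3/s), K = 6 h, X = 0.2, dt = 2 h
inflow = [10, 30, 68, 50, 40, 31, 23, 15, 10, 10]
print([round(q, 1) for q in muskingum_route(inflow, K=6.0, X=0.2, dt=2.0)])
```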
2442 Blind Identification of MA Models Using Cumulants
Authors: Mohamed Boulouird, Moha M'Rabet Hassani
Abstract:
In this paper, several techniques for the blind identification of moving average (MA) processes are presented. These methods utilize third- and fourth-order cumulants of the noisy observations of the system output. The system is driven by an independent and identically distributed (i.i.d.) non-Gaussian sequence that is not observed. Two nonlinear optimization algorithms, namely the Gradient Descent and the Gauss-Newton algorithms, are presented. An algorithm based on the joint diagonalization of the fourth-order cumulant matrices (FOSI) is also considered, as well as an improved version of the classical C(q, 0, k) algorithm based on the choice of the best 1-D slice of fourth-order cumulants. To illustrate the effectiveness of our methods, various simulation examples are presented.
Keywords: Cumulants, Identification, MA models, Parameter estimation
2441 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data
Authors: Chen Chou, Feng-Tyan Lin
Abstract:
Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of raw data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, as one of the government's open data sets, are numerous, complete and quickly updated. One may recall that living areas have traditionally been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's movement paths in daily life. In this study, the concept of "Living Area" is replaced by "Influence Range" to capture dynamics and variation with the time and purpose of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan city and the purpose of trips, and to discuss the living area as currently delimited. It establishes a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
Keywords: Big Data, ITS, influence range, living area, central place theory, visualization.
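A minimal sketch of the kind of Python data mining mentioned above, counting trips between gantry pairs with pandas; the column names (entry_gantry, exit_gantry, travel_time) and the sample records are hypothetical, since the actual ETC schema is not given here.

```python
import pandas as pd

# Hypothetical ETC records: one row per vehicle trip on the freeway
trips = pd.DataFrame({
    "entry_gantry": ["Tainan", "Tainan", "Kaohsiung", "Chiayi", "Tainan"],
    "exit_gantry": ["Chiayi", "Kaohsiung", "Tainan", "Tainan", "Chiayi"],
    "travel_time": [42, 35, 37, 45, 40],   # minutes
})

# Trip counts and mean travel time per origin-destination pair
od = (trips.groupby(["entry_gantry", "exit_gantry"])
            .agg(trip_count=("travel_time", "size"), mean_minutes=("travel_time", "mean"))
            .reset_index()
            .sort_values("trip_count", ascending=False))
print(od)
```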
2440 Using Genetic Programming to Evolve a Team of Data Classifiers
Authors: Gregor A. Morrison, Dominic P. Searson, Mark J. Willis
Abstract:
The purpose of this paper is to demonstrate the ability of a genetic programming (GP) algorithm to evolve a team of data classification models. The GP algorithm used in this work is "multigene" in nature, i.e. there are multiple tree structures (genes) that are used to represent team members. Each team member assigns a data sample to one of a fixed set of output classes. A majority vote, determined using the mode (highest occurrence) of classes predicted by the individual genes, is used to determine the final class prediction. The algorithm is tested on a binary classification problem. For the case study investigated, compact classification models are obtained with comparable accuracy to alternative approaches.
Keywords: classification, genetic programming.
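The majority vote described above, taking the mode of the classes predicted by the individual genes, can be sketched as follows; the per-gene predictions are invented for illustration.

```python
import numpy as np
from collections import Counter

def team_vote(gene_predictions):
    """gene_predictions: (n_genes, n_samples) array of class labels.
    Returns the per-sample majority vote (mode of the genes' predictions)."""
    gene_predictions = np.asarray(gene_predictions)
    return np.array([Counter(col).most_common(1)[0][0] for col in gene_predictions.T])

# Five genes (team members) classifying four samples into classes 0/1
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1],
         [0, 1, 1, 0]]
print(team_vote(preds))  # [0 1 1 0]
```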
2439 Classifying Students for E-Learning in Information Technology Course Using ANN
Authors: S. Areerachakul, N. Ployong, S. Na Songkla
Abstract:
The objective of this research is to select the most accurate model, using a neural network technique, for screening potential students who enroll in the IT course through electronic learning at Suan Sunandha Rajabhat University. It is designed to help students select the appropriate courses by themselves. The results showed that the most accurate model used 100-fold cross-validation, with an accuracy of 73.58%.
Keywords: Artificial neural network, classification, students.
2438 Strategic Management via System Dynamics Simulation Models
Authors: G. Papageorgiou, A. Hadjis
Abstract:
This paper examines the problem of strategic management under highly turbulent and dynamic business environmental conditions. As shown, the high complexity of the problem can be managed with the use of system dynamics models and computer simulation to obtain insights into, and a thorough understanding of, the interdependencies between the organizational structure and the elements of the business environment, so that effective product–market strategies can be designed. Simulation reveals the underlying forces that hold together the structure of an organizational system in relation to its environment. Such knowledge will contribute to the avoidance of fundamental planning errors and enable appropriate, proactive, well-focused action.
Keywords: Strategic Management, System Dynamics, Modeling and Simulation, Strategic Planning, Organizational Dynamics.
2437 Biomechanical Properties of Hen's Eggshell: Experimental Study and Numerical Modeling
Authors: A. Darvizeh, H. Rajabi, S. Fatahtooei Nejad, A. Khaheshi, P. Haghdoust
Abstract:
In this article, the biomechanical aspects of the hen's eggshell as a natural ceramic structure are studied. Images taken by a scanning electron microscope (SEM) are used to investigate the microscopic aspects of the egg. It is observed that the eggshell has a three-layered microstructure with different morphological and structural characteristics. Studies on the eggshell membrane (ESM) as a porous tissue suggest that it is placed to prevent the penetration of microorganisms into the egg. Finally, numerical models of the egg are presented to study the stress distribution and its deformation under different loading conditions. The effects of two different types of loading (hydrostatic and point loading) on two different shell models (with constant and variable thicknesses) are investigated in detail.
Keywords: Eggshell, biomechanical properties, Scanning electron microscope, Numerical Modeling.
2436 Application of Adaptive Neuro-Fuzzy Inference System in Smoothing Transition Autoregressive Models
Authors: E. Giovanis
Abstract:
In this paper, we propose and examine an Adaptive Neuro-Fuzzy Inference System (ANFIS) for Smoothing Transition Autoregressive (STAR) modeling. Because STAR models follow a fuzzy logic approach, fuzzy rules can be incorporated in the non-linear part, or other training or computational methods, such as the error backpropagation algorithm, can be applied instead of nonlinear least squares. Furthermore, additional fuzzy membership functions can be examined, besides the logistic and exponential, such as the triangular, Gaussian and generalized bell functions, among others. We examine two macroeconomic variables of the US economy, the inflation rate and the 6-month treasury bill interest rate.
Keywords: Forecasting, Neuro-Fuzzy, Smoothing transition, Time-series.
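For reference, the two-regime logistic STAR form that the fuzzy-membership interpretation above builds on can be sketched as follows; the lag order and parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition function G(s; gamma, c) in (0, 1):
    gamma controls the smoothness of the regime switch, c its location."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def lstar_step(y_lag, phi1, phi2, gamma, c):
    """One-step LSTAR(1) prediction: a weighted mix of two AR(1) regimes,
    with weights given by the logistic transition evaluated at the lagged value."""
    g = logistic_transition(y_lag, gamma, c)
    regime1 = phi1[0] + phi1[1] * y_lag
    regime2 = phi2[0] + phi2[1] * y_lag
    return (1 - g) * regime1 + g * regime2

# Illustrative parameters: low-inflation regime vs. high-inflation regime
print(lstar_step(y_lag=2.0, phi1=(0.1, 0.8), phi2=(0.5, 0.4), gamma=2.5, c=3.0))
```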
2435 The Model Establishment and Analysis of TRACE/FRAPTRAN for Chinshan Nuclear Power Plant Spent Fuel Pool
Authors: J. R. Wang, H. T. Lin, Y. S. Tseng, W. Y. Li, H. C. Chen, S. W. Chen, C. Shih
Abstract:
TRACE is developed by the U.S. NRC for nuclear power plant (NPP) safety analysis. In this research, we focus on the establishment and application of TRACE/FRAPTRAN/SNAP models for the Chinshan NPP (BWR/4) spent fuel pool. The geometry of the spent fuel pool is 12.17 m × 7.87 m × 11.61 m. In this study, there are three TRACE/SNAP models: a one-channel, a two-channel, and a multi-channel TRACE/SNAP model. Additionally, a cooling system failure of the spent fuel pool was simulated and analyzed using the above models. According to the analysis results, the peak cladding temperature response was more accurate in the multi-channel TRACE/SNAP model. The results showed that uncovering of the fuel occurred 2.7 days after the cooling system failed. In order to estimate the detailed fuel rod performance, the FRAPTRAN code was used in this research. According to the FRAPTRAN results, the highest cladding temperature was located at node 21 of the fuel rod (the highest node being node 23), and the cladding burst occurred roughly after 3.7 days.
Keywords: TRACE, FRAPTRAN, SNAP, spent fuel pool.