Search results for: computational error
983 Comparison of Three Turbulence Models in Wear Prediction of Multi-Size Particulate Flow through Rotating Channel
Authors: Pankaj K. Gupta, Krishnan V. Pagalthivarthi
Abstract:
The present work compares the performance of three turbulence modeling approaches (based on the two-equation k-ε model) in predicting erosive wear in multi-size dense slurry flow through a rotating channel. All three turbulence models include a rotation modification to the production term in the turbulent kinetic energy equation. The two-phase flow field, obtained numerically using a Galerkin finite element methodology, relates the local flow velocity and concentration to the wear rate via a suitable wear model. The wear models for both sliding wear and impact wear mechanisms account for the particle size dependence. Results of predicted wear rates using the three turbulence models are compared for a large number of cases spanning such operating parameters as rotation rate, solids concentration, flow rate, particle size distribution and so forth. The root-mean-square error between the FE-generated data and the correlation between maximum wear rate and the operating parameters is found to be less than 2.5% for all three models.
Keywords: Rotating channel, maximum wear rate, multi-size particulate flow, k-ε turbulence models.
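A minimal sketch of how such a relative root-mean-square error between FE-generated maximum wear rates and a fitted correlation could be computed, using hypothetical values rather than the paper's data:

    import numpy as np

    # Hypothetical FE-generated maximum wear rates and correlation predictions.
    wear_fe = np.array([1.02, 1.48, 2.11, 2.95, 3.60])      # e.g. mm/year
    wear_corr = np.array([1.00, 1.50, 2.08, 3.00, 3.55])    # from a fitted correlation

    # Relative root-mean-square error between FE data and the correlation, in percent.
    rmse = np.sqrt(np.mean(((wear_fe - wear_corr) / wear_fe) ** 2)) * 100.0
    print(f"relative RMSE = {rmse:.2f} %")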
982 An Overview of the Application of Fuzzy Inference System for the Automation of Breast Cancer Grading with Spectral Data
Authors: Shabbar Naqvi, Jonathan M. Garibaldi
Abstract:
Breast cancer is one of the most frequently occurring cancers in women throughout the world, including the UK. The grading of this cancer plays a vital role in the prognosis of the disease. In this paper, we present an overview of the use of the advanced computational method of fuzzy inference systems as a tool for the automation of breast cancer grading. A new spectral data set obtained from Fourier Transform Infrared Spectroscopy (FTIR) of cancer patients has been used for this study. The future work outlines the potential areas of fuzzy systems that can be used for the automation of breast cancer grading.
Keywords: Breast cancer, FTIR, fuzzy inference system, principal component analysis
981 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building
Authors: A. Schuchter, M. Promegger
Abstract:
The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a cool feature for teachers – it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and compute and calculate mathematical examples while the students watch the process. On the other hand, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects with the help of drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects from mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative shared 3D modeling with clara.io, where students create mathematical know-how. The instructor follows a problem-based learning approach and encourages the students to find solutions in their own time and in their own way, which will help them develop new skills intuitively and boost logically structured thinking. The collaborative aspect of working in groups will help the students develop communication skills as well as structural and computational thinking. Students are not just listeners as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called “open book”) with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought. Therefore, students will learn to formulate goals, solve problems, and create a ready-to-use product with the help of “reverse engineering”, cross-referencing and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose, while going through all stages of product development.
Keywords: Flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning.
980 CFD Simulation of Non-Newtonian Fluid Flow in Arterial Stenoses with Surface Irregularities
Authors: R. Manimaran
Abstract:
CFD simulations are carried out in arterial stenoses with 48% areal occlusion. A non-Newtonian fluid model is selected for the blood flow, as the same problem has been solved before with a Newtonian fluid model. Studies on flow resistance in the presence of surface irregularities are carried out. Investigations are also performed on the pressure drop at various Reynolds numbers. The present study revealed that the pressure drop across a stenosed artery is practically unaffected by surface irregularities at low Reynolds numbers, while distinct flow features are observed and discussed at higher Reynolds numbers.
Keywords: Blood flow, roughness, computational fluid dynamics, bio-fluid mechanics.
979 Design Method for Knowledge Base Systems in Education Using COKB-ONT
Authors: Nhon Do, Tuyen Trong Tran, Phan Hoai Truong
Abstract:
Nowadays, e-Learning is increasingly popular, especially in Vietnam. In e-Learning, materials for studying are very important. It is necessary to design knowledge base systems and expert systems which support searching, querying and solving of problems. The ontology, called Computational Object Knowledge Base Ontology (COKB-ONT), is a useful tool for designing knowledge base systems in practice. In this paper, a design method for knowledge base systems in education using COKB-ONT is presented. We also present the design of a knowledge base system that supports studying knowledge and solving problems in higher mathematics.
Keywords: Artificial intelligence, knowledge base systems, ontology, educational software.
978 Sectoral Energy Consumption in South Africa and Its Implication for Economic Growth
Authors: Kehinde Damilola Ilesanmi, Dev Datt Tewari
Abstract:
South Africa is in its post-industrial era, moving from the primary and secondary sectors to the tertiary sector. The study investigated the impact of disaggregated energy consumption (coal, oil, and electricity) on the primary, secondary and tertiary sectors of the economy between 1980 and 2012 in South Africa. Using a vector error correction model, it was established that South Africa is an energy-dependent economy, and that energy (especially electricity and oil) is a limiting factor of growth. This implies that implementation of energy conservation policies may hamper economic growth. Output growth is significantly outpacing energy supply, which has necessitated load shedding. To meet the excess energy demand, there is a need to increase generating capacity, which will necessitate increased investment in the electricity sector as well as strategic steps to increase oil production. There is also a need to explore more renewable energy sources in order to meet the growing energy demand without compromising growth and environmental sustainability. Policy makers should also pursue energy efficiency policies, especially at the sectoral level of the economy.
Keywords: Causality, economic growth, energy consumption, hypothesis, sectoral output.
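A minimal sketch of estimating a vector error correction model of this kind with statsmodels, using synthetic cointegrated series in place of the actual South African sectoral data; the lag order, cointegration rank and deterministic term are illustrative assumptions:

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM

    # Hypothetical annual series, 1980-2012: sectoral output, electricity and oil use,
    # sharing a common stochastic trend so that a cointegrating relation exists.
    rng = np.random.default_rng(0)
    n = 33
    trend = np.cumsum(rng.normal(0.02, 0.01, n))
    data = np.column_stack([
        trend + rng.normal(0, 0.005, n),   # tertiary-sector output
        trend + rng.normal(0, 0.005, n),   # electricity consumption
        trend + rng.normal(0, 0.010, n),   # oil consumption
    ])

    # One cointegrating relation, one lagged difference; settings are illustrative.
    res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
    print(res.summary())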
977 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques
Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah
Abstract:
Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or underestimation of construction cost. The development of a standard set of measurement rules that is understandable by all those involved in a construction project has not totally solved the challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies proposed.
Keywords: BIM, Construction projects, Cost estimation, NRM, Ontology.
976 A Hybrid Classification Method using Artificial Neural Network Based Decision Tree for Automatic Sleep Scoring
Authors: Haoyu Ma, Bin Hu, Mike Jackson, Jingzhi Yan, Wen Zhao
Abstract:
In this paper, we propose a new classification method for automatic sleep scoring using an artificial neural network based decision tree. It treats the sleep scoring process as a series of two-class problems and solves them with a decision tree made up of a group of neural network classifiers, each of which uses a special feature set and is aimed at only one specific sleep stage in order to maximize the classification effect. A single electroencephalogram (EEG) signal is used for our analysis rather than multiple biological signals, which greatly simplifies the data acquisition process. Experimental results demonstrate that the average epoch-by-epoch agreement between visual scoring and the proposed method in separating 30 s wakefulness+S1, REM, S2 and SWS epochs was 88.83%. This study shows that the proposed method performed well in all four stages and can effectively limit error propagation at the same time. It could, therefore, be an efficient method for automatic sleep scoring. Additionally, since it requires only a small volume of data, it could be suited to pervasive applications.
Keywords: Sleep, Sleep stage, Automatic sleep scoring, Electroencephalography, Decision tree, Artificial neural network
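A minimal sketch of the decision-tree-of-binary-classifiers idea, using scikit-learn's MLPClassifier as a stand-in for the paper's neural networks; the stage ordering, feature set and network sizes are illustrative assumptions, not the authors' design:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    STAGES = ["Wake+S1", "REM", "S2", "SWS"]          # order of the binary decisions

    def train_cascade(X, y):
        """Train one binary network per stage: 'this stage' vs. 'everything later'."""
        cascade, remaining = [], np.ones(len(y), dtype=bool)
        for stage in STAGES[:-1]:
            clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
            clf.fit(X[remaining], (y[remaining] == stage).astype(int))
            cascade.append((stage, clf))
            remaining &= (y != stage)                  # only unresolved epochs go deeper
        return cascade

    def predict_epoch(cascade, x):
        for stage, clf in cascade:
            if clf.predict(x.reshape(1, -1))[0] == 1:
                return stage
        return STAGES[-1]                              # fall through to the last stage

    # Tiny synthetic demo: 200 epochs, 10 EEG features each, random stage labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = rng.choice(STAGES, size=200)
    cascade = train_cascade(X, y)
    print(predict_epoch(cascade, X[0]))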
975 Pattern Classification of Back-Propagation Algorithm Using Exclusive Connecting Network
Authors: Insung Jung, Gi-Nam Wang
Abstract:
The objective of this paper is to design a pattern classification model based on the back-propagation (BP) algorithm for a decision support system. The standard BP model uses full connection between the nodes of successive layers from the input to the output layer. Therefore, it requires considerable computing time and many iterations to reach good performance and an acceptable error rate when generating patterns or training the network. The proposed model instead uses exclusive connections between the hidden-layer nodes and the output nodes. The advantage of this model is a smaller number of iterations and better performance compared with the standard back-propagation model. We simulated several classification data sets and different settings of network factors (e.g., number of hidden layers and nodes, number of classes, and iterations). During our simulations, we found that most cases were handled better by the exclusive-connection BP network model than by standard BP. We expect that this algorithm can be applied to identification of user faces, data analysis, and mapping between environmental data and information.
Keywords: Neural network, back-propagation, classification.
974 Efficient Variants of Square Contour Algorithm for Blind Equalization of QAM Signals
Authors: Ahmad Tariq Sheikh, Shahzad Amin Sheikh
Abstract:
A new distance-adjusted approach is proposed in which static square contours are defined around an estimated symbol in a QAM constellation, creating regions that correspond to fixed step sizes and weighting factors. As a result, the equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria that is scaled by a variable step size. This approach is the basis of two new algorithms: the Variable step size Square Contour Algorithm (VSCA) and the Variable step size Square Contour Decision-Directed Algorithm (VSDA). The proposed schemes are compared with existing blind equalization algorithms in the SCA family in terms of convergence speed, constellation eye opening and residual ISI suppression. Simulation results for 64-QAM signaling over empirically derived microwave radio channels confirm the efficacy of the proposed algorithms. An RTL implementation of the blind adaptive equalizer based on the proposed schemes is presented, and the system is configured to operate in VSCA error signal mode for square QAM signals up to 64-QAM.
Keywords: Adaptive filtering, blind equalization, Square Contour Algorithm.
973 Application of PSO Technique for Seismic Control of Tall Building
Authors: A. Shayeghi, H. Shayeghi, H. Eimani Kalasar
Abstract:
In recent years, tuned mass damper (TMD) control systems for civil engineering structures have attracted considerable attention. This paper emphasizes the application of particle swarm optimization (PSO) to design and optimize the parameters of the TMD control scheme for achieving the best results in the reduction of the building response under earthquake excitations. The Integral of the Time multiplied Absolute value of the Error (ITAE), based on the relative displacement of all floors in the building, is taken as the performance index of the optimization criterion. The problem of robust TMD controller design is formulated as an optimization problem based on the ITAE performance index, to be solved using the PSO technique, which has a strong ability to find near-optimal results. An 11-story realistic building, located in the city of Rasht, Iran, is considered as a test system to demonstrate the effectiveness of the proposed method. Analysis of the results through time-domain simulation and several performance indices reveals that the designed PSO-based TMD controller has an excellent capability in reducing the response of the seismically excited example building.
Keywords: TMD, Particle Swarm Optimization, Tall Buildings, Structural Dynamics.
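A minimal sketch of the ITAE criterion that such a PSO search would minimize, assuming hypothetical time and floor-displacement arrays from a structural simulation:

    import numpy as np

    def itae(t, errors):
        """Integral of Time multiplied Absolute Error, summed over all floors.

        t      : 1-D array of simulation time instants
        errors : 2-D array, one column of relative displacements per floor
        """
        weighted = t[:, None] * np.abs(errors)       # t * |e(t)| for every floor
        return np.trapz(weighted, t, axis=0).sum()   # integrate over time, sum floors

    # Hypothetical decaying floor responses for three floors over 20 s.
    t = np.linspace(0.0, 20.0, 2001)
    floor_disp = 0.01 * np.exp(-0.2 * t)[:, None] * np.sin(2 * np.pi * np.outer(t, [1.0, 1.3, 1.7]))
    print(itae(t, floor_disp))

    # In a PSO loop, each candidate TMD parameter set would be simulated and
    # scored with itae(t, floor_displacements); smaller is better.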
972 Geometric Modeling of Illumination on the TFT-LCD Panel using Bezier Surface
Authors: Kyong-min Lee, Moon Soo Chang, PooGyeon Park
Abstract:
In this paper, we propose a geometric modeling of illumination on a patterned image containing etched transistors. This image is captured by a commercial camera during the inspection of a TFT-LCD panel. Inspection of defects is an important process in the production of LCD panels, but the regional difference in brightness, which has a negative effect on the inspection, is due to the uneven illumination environment. In order to solve this problem, we present a geometric modeling of illumination consisting of an interpolation using the least squares method and 3D modeling using a Bezier surface. Our computational time, by using the sampling method, is shorter than that of previous methods. Moreover, the model can be further used to correct brightness in every patterned image.
Keywords: Bezier, defect, geometric modeling, illumination, inspection, LCD, panel.
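A minimal sketch of evaluating a Bezier surface from a grid of control values via Bernstein polynomials; the least-squares fitting of the control values to sampled brightness is omitted and the control grid below is hypothetical:

    import numpy as np
    from math import comb

    def bernstein(n, i, t):
        return comb(n, i) * t**i * (1.0 - t)**(n - i)

    def bezier_surface(ctrl, u, v):
        """Evaluate a Bezier surface at (u, v) in [0,1]^2.

        ctrl : (m+1, n+1) array of control values (e.g. brightness levels).
        """
        m, n = ctrl.shape[0] - 1, ctrl.shape[1] - 1
        bu = np.array([bernstein(m, i, u) for i in range(m + 1)])
        bv = np.array([bernstein(n, j, v) for j in range(n + 1)])
        return bu @ ctrl @ bv

    # Example: a 4x4 control grid approximating a smooth illumination field.
    ctrl = np.linspace(100, 160, 16).reshape(4, 4)
    print(bezier_surface(ctrl, 0.3, 0.7))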
971 Neural Network Learning Based on Chaos
Authors: Truong Quang Dang Khoa, Masahiro Nakagawa
Abstract:
Chaos and fractals are novel fields of physics and mathematics, offering a new viewpoint on the universe and inspiring many ideas for solving current problems. In this paper, a novel algorithm based on a chaotic sequence generator, with a strong ability to adapt and reach the global optimum, is proposed. The adaptive ability of the proposed algorithm operates in two steps: the first is a breadth-first search and the second is a depth-first search. The proposed algorithm is examined on two test functions, the Camel function and the Schaffer function. Furthermore, the proposed algorithm is applied to optimize the training of multilayer neural networks.
Keywords: learning and evolutionary computing, Chaos Optimization Algorithm, Artificial Neural Networks, nonlinear optimization, intelligent computational technologies.
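A minimal sketch of a chaotic-sequence search on the two-dimensional Schaffer function, using the logistic map as a generic chaotic generator; the shrinking search radius only hints at the breadth-first/depth-first refinement and is not the authors' exact algorithm:

    import numpy as np

    def schaffer(x, y):
        num = np.sin(np.sqrt(x**2 + y**2))**2 - 0.5
        den = (1.0 + 0.001 * (x**2 + y**2))**2
        return 0.5 + num / den          # global minimum 0 at the origin

    def chaotic_search(iters=20000, lo=-10.0, hi=10.0, seed=(0.345, 0.678)):
        cx, cy = seed                    # logistic-map states in (0, 1)
        best, best_val = None, np.inf
        radius = hi - lo                 # coarse search first, then refine around best
        for k in range(iters):
            cx, cy = 4.0 * cx * (1.0 - cx), 4.0 * cy * (1.0 - cy)
            if best is None:
                x, y = lo + (hi - lo) * cx, lo + (hi - lo) * cy
            else:
                x, y = best[0] + radius * (cx - 0.5), best[1] + radius * (cy - 0.5)
            val = schaffer(x, y)
            if val < best_val:
                best, best_val = (x, y), val
            if k % 2000 == 1999:
                radius *= 0.5            # depth-first style refinement
        return best, best_val

    print(chaotic_search())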
970 Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardiographic Images
Authors: Faten A. Dawood, Rahmita W. Rahmat, Suhaini B. Kadiman, Lili N. Abdullah, Mohd D. Zamrin
Abstract:
Echocardiography imaging is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D-echocardiography (2DE) is to address two major problems: speckle noise and low image quality. Therefore, speckle noise reduction is one of the important pre-processing steps used to reduce distortion effects in 2DE image segmentation. In this paper, we present common filters based on some form of low-pass spatial smoothing, such as the mean, Gaussian, and median filters. The Laplacian filter was used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after being applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quality measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the output enhanced image.
Keywords: Gaussian operator, median filter, speckle texture, peak signal-to-noise ratio
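A minimal sketch of the three quantitative measures for 8-bit grayscale images held as NumPy arrays; the definitions follow the usual forms, which may differ in detail from those used in the paper:

    import numpy as np

    def rmse(ref, test):
        return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

    def psnr(ref, test, peak=255.0):
        e = rmse(ref, test)
        return np.inf if e == 0 else 20.0 * np.log10(peak / e)

    def snr(ref, test):
        ref = ref.astype(float)
        noise = ref - test.astype(float)
        return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

    # Tiny synthetic demo; in practice 'test' would be, e.g., a median-filtered 2DE frame.
    ref = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
    test = np.clip(ref + np.random.default_rng(0).integers(-3, 4, ref.shape), 0, 255).astype(np.uint8)
    print(rmse(ref, test), psnr(ref, test), snr(ref, test))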
969 Statistical Computational of Volatility in Financial Time Series Data
Authors: S. Al Wadi, Mohd Tahir Ismail, Samsul Ariffin Abdul Karim
Abstract:
It is well known that, with developments in the economic sector and financial crises occurring throughout the world, volatility measurement has become one of the most important concepts in financial time series. Therefore, in this paper we discuss volatility for the Amman stock market (Jordan) over a certain period of time. Since the wavelet transform is one of the best-known filtering methods and has grown rapidly over the last decade, we compare this method with the traditional technique, the fast Fourier transform, to decide which method is better for analyzing volatility. The comparison is carried out on several statistical properties using Matlab.
Keywords: Fast Fourier transform, Haar wavelet transform, Matlab (wavelet tools), stock market, volatility.
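A minimal sketch of the two transforms being compared, applied to a hypothetical series of returns; PyWavelets supplies the Haar decomposition and NumPy the FFT, and the energy measures below are illustrative rather than the paper's procedure:

    import numpy as np
    import pywt

    returns = np.random.default_rng(0).normal(0.0, 0.02, 512)   # hypothetical daily returns

    # Haar wavelet decomposition: approximation + detail coefficients at 3 levels.
    coeffs = pywt.wavedec(returns, "haar", level=3)

    # Fast Fourier transform of the same series.
    spectrum = np.fft.rfft(returns)

    # A simple volatility proxy per scale/band: energy of detail or frequency bands.
    wavelet_energy = [float(np.sum(c ** 2)) for c in coeffs[1:]]
    fft_energy = float(np.sum(np.abs(spectrum) ** 2)) / len(returns)
    print(wavelet_energy, fft_energy)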
968 Analysis of Diverse Clustering Tools in Data Mining
Authors: S. Sarumathi, N. Shanthi, M. Sharmila
Abstract:
Clustering in data mining is an unsupervised learning technique that aggregates data objects into meaningful groups such that the intra-cluster similarity of objects is maximized and the inter-cluster similarity of objects is minimized. Over the past decades, several clustering tools have emerged in which clustering algorithms are built in and which make it easier to extract the expected results. Data mining mainly deals with huge databases, which imposes rigorous computational constraints on cluster analysis. These challenges pave the way for the emergence of powerful, scalable data mining clustering software. In this survey, a variety of clustering tools used in data mining are elucidated along with the pros and cons of each tool.
Keywords: Cluster Analysis, Clustering Algorithms, Clustering Techniques, Association, Visualization.
967 On-Road Text Detection Platform for Driver Assistance Systems
Authors: Guezouli Larbi, Belkacem Soundes
Abstract:
Automation of the text detection process can help the human in the driving task. It can be very useful in helping drivers obtain more information about their environment by facilitating the reading of road signs such as directional signs, events, stores, etc. In this paper, a system consisting of two stages is proposed. In the first stage, we use pseudo-Zernike moments to pinpoint areas of the image that may contain text. The architecture of this part is based on three main steps: region of interest (ROI) detection, text localization, and non-text region filtering. Then, in the second stage, we present a convolutional neural network architecture (On-Road Text Detection Network, ORTDN) which serves as the classification phase. The results show that the proposed framework achieved ≈ 35 fps and an mAP of ≈ 90%, i.e., a low computational time with competitive accuracy.
Keywords: Text detection, CNN, PZM, deep learning.
966 PAPR Reduction Method for OFDM Signal by Using Dummy Sub-carriers
Authors: Pisit Boonsrimuang, Arjin Numsomran, Tawil Paungma, Hideo Kobayashi
Abstract:
One of the disadvantages of using OFDM is the large peak-to-average power ratio (PAPR) of its time domain signal. A signal with large PAPR would cause severe degradation of bit error rate (BER) performance due to inter-modulation noise in the nonlinear channel. This paper proposes an improved DSI (Dummy Sequence Insertion) method, which can achieve better PAPR and BER performance. The feature of the proposed method is to optimize the phase of each dummy sub-carrier so as to reduce the PAPR by changing all predetermined phase coefficients in the time domain signal, which is calculated for data sub-carriers and dummy sub-carriers separately. To achieve better PAPR performance, this paper also proposes to employ a time-frequency domain swapping algorithm for fine adjustment of the phase coefficients of the dummy sub-carriers, which achieves lower processing complexity and better PAPR and BER performance than the conventional DSI method. This paper presents various computer simulation results to verify the effectiveness of the proposed method in comparison with conventional methods in the nonlinear channel.
Keywords: OFDM, PAPR, dummy sub-carriers, non-linear
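A minimal sketch of how the PAPR of one OFDM symbol is measured from its time-domain signal (the baseline quantity the DSI method reduces); the oversampling and QPSK mapping are illustrative assumptions:

    import numpy as np

    def papr_db(freq_symbols, oversample=4):
        """PAPR of one OFDM symbol built from frequency-domain sub-carrier symbols."""
        n = len(freq_symbols)
        padded = np.zeros(n * oversample, dtype=complex)
        padded[:n // 2] = freq_symbols[:n // 2]          # zero-padding in the middle band
        padded[-(n - n // 2):] = freq_symbols[n // 2:]
        x = np.fft.ifft(padded)                          # time-domain OFDM signal
        peak = np.max(np.abs(x) ** 2)
        avg = np.mean(np.abs(x) ** 2)
        return 10.0 * np.log10(peak / avg)

    # Example: 64 random QPSK data sub-carriers (dummy sub-carriers would be appended
    # and their phases optimized in the DSI method).
    rng = np.random.default_rng(1)
    qpsk = (2 * rng.integers(0, 2, 64) - 1 + 1j * (2 * rng.integers(0, 2, 64) - 1)) / np.sqrt(2)
    print(f"PAPR = {papr_db(qpsk):.2f} dB")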
965 Optimizing Turning Parameters for Cylindrical Parts Using Simulated Annealing Method
Authors: Farhad Kolahan, Mahdi Abachizadeh
Abstract:
In this paper, a simulated annealing algorithm has been developed to optimize machining parameters in turning operation on cylindrical workpieces. The turning operation usually includes several passes of rough machining and a final pass of finishing. Seven different constraints are considered in a non-linear model where the goal is to achieve minimum total cost. The weighted total cost consists of machining cost, tool cost and tool replacement cost. The computational results clearly show that the proposed optimization procedure has considerably improved total operation cost by optimally determining machining parameters.
Keywords: Optimization, Simulated Annealing, Machining Parameters, Turning Operation.
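A minimal sketch of the simulated annealing loop for a machining-cost objective, assuming a hypothetical total_cost(v, f, d) function (cutting speed, feed, depth of cut) and simple bounds; the paper's full seven-constraint model is not reproduced:

    import math
    import random

    BOUNDS = {"v": (50.0, 300.0), "f": (0.1, 0.6), "d": (0.5, 3.0)}   # illustrative bounds

    def total_cost(v, f, d):
        # Hypothetical stand-in for the weighted machining + tool + replacement cost.
        return 1000.0 / (v * f * d) + 0.002 * v + 0.5 * d

    def anneal(iters=5000, temp=100.0, cooling=0.999):
        x = {k: random.uniform(*b) for k, b in BOUNDS.items()}
        cost = total_cost(**x)
        best, best_cost = dict(x), cost
        for _ in range(iters):
            # Perturb each parameter and clip it back into its bounds.
            cand = {k: min(max(x[k] + random.gauss(0, 0.05) * (b[1] - b[0]), b[0]), b[1])
                    for k, b in BOUNDS.items()}
            c = total_cost(**cand)
            if c < cost or random.random() < math.exp((cost - c) / temp):
                x, cost = cand, c
                if c < best_cost:
                    best, best_cost = dict(cand), c
            temp *= cooling
        return best, best_cost

    print(anneal())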
964 A Genetic Algorithm to Schedule the Flow Shop Problem under Preventive Maintenance Activities
Authors: J. Kaabi, Y. Harrath
Abstract:
This paper studies the flow shop scheduling problem under machine availability constraints. The machines are subject to flexible preventive maintenance activities. The non-resumable scenario for the jobs is considered; that is, when a job is interrupted by an unavailability period of a machine, it must be restarted from the beginning. The objective is to minimize the total tardiness of the jobs and the advance/tardiness of the maintenance activities. To solve the problem, a genetic algorithm was developed and successfully tested and validated on many problem instances. The computational results showed that the new genetic algorithm outperforms an earlier proposed algorithm.
Keywords: Flow shop scheduling, maintenance, genetic algorithm, priority rules.
963 Re-Handling Operations in Small Container Terminal Operated by Reach Stackers
Authors: Adam Galuszka, Krzysztof Skrzypczyk, Damian Bereska, Marcin Pacholczyk
Abstract:
In this paper, an analysis of the average number of re-handlings is proposed to solve the problem of finding the bay configuration in a small container terminal in Gliwice, Poland. Re-handlings in this terminal can be performed only by reach stackers. The goal of the heuristic is to plan the reach stacker moves in the terminal so that the target containers are reached and the number of re-handlings is minimized. The real situation also requires taking into account uncertainty in the problem environment, caused by the fact that many containers are not delivered to the terminal on time or cannot be dispatched at the scheduled time. To enable this, the heuristic uses some assumptions to simplify the problem analysis.
Keywords: Container terminal, re-handling operations, computational efficiency, WiMax.
962 CAD/CAM Algorithms for 3D Woven Multilayer Textile Structures
Authors: Martin A. Smith, Xiaogang Chen
Abstract:
This paper proposes new algorithms for the computer-aided design and manufacture (CAD/CAM) of 3D woven multi-layer textile structures. Existing commercial CAD/CAM systems are often restricted to the design and manufacture of 2D weaves. Those CAD/CAM systems that do support the design and manufacture of 3D multi-layer weaves are often limited to manual editing of design paper grids on the computer display and weave retrieval from stored archives. This complex design activity is time-consuming, tedious and error-prone, and requires the considerable experience and skill of a technical weaver. Recent research reported in the literature has addressed some of the shortcomings of commercial 3D multi-layer weave CAD/CAM systems. However, earlier research results have shown the need for further work on weave specification, weave generation, yarn path editing and layer binding. Analysis of 3D multi-layer weaves in this research has led to the design and development of efficient and robust algorithms for the CAD/CAM of 3D woven multi-layer textile structures. The resulting algorithmically generated weave designs can be used as a basis for lifting plans that can be loaded onto looms equipped with electronic shedding mechanisms for the CAM of 3D woven multi-layer textile structures.
Keywords: CAD/CAM, multi-layer, textile, weave.
961 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for data points and measurements of a physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools; the proposed method overcomes this by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error, while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces the resources spent modelling bed topography. By updating its parameters and generating a new bed, the method also accounts for changes in topography over time due to erosion, sediment transport, and other external factors that could affect the topography of the ground. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the bed topography generated by the proposed method and over a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
Keywords: Bed topography, FBM, LBM, shallow water, simulations.
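A minimal sketch of generating a one-dimensional fractional-Brownian-motion bed profile by random midpoint displacement, an approximation to FBM; the Hurst exponent and scale are illustrative, not the values used in the study:

    import numpy as np

    def fbm_profile(levels=8, hurst=0.7, scale=1.0, seed=0):
        """1-D bed elevation profile of length 2**levels + 1 via midpoint displacement."""
        rng = np.random.default_rng(seed)
        n = 2 ** levels
        z = np.zeros(n + 1)
        z[-1] = rng.normal(0.0, scale)
        step, sigma = n, scale
        for _ in range(levels):
            half = step // 2
            for i in range(half, n, step):
                z[i] = 0.5 * (z[i - half] + z[i + half]) + rng.normal(0.0, sigma)
            sigma *= 0.5 ** hurst          # displacement size decays with the Hurst exponent
            step = half
        return z

    bed = fbm_profile()
    print(bed.min(), bed.max())            # elevations to feed into the LBM bed array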
960 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data
Authors: Wei Lei, Hui Chen, Lin Lu
Abstract:
Microscopic emission and fuel consumption models have been widely recognized as an effective method to quantify real traffic emissions and energy consumption when applied together with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model with the purpose of capturing the effects of historical accelerations, interacting with current speed, on emissions and fuel consumption. The MEF model is calibrated by a multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China, by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with lower Mean Absolute Percentage Error (MAPE), compared to two other models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, and NOx emissions and fuel consumption, respectively.
Keywords: Emission, fuel consumption, light-duty vehicle, microscopic, modeling.
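A minimal sketch of the MAPE figure used in the validation, with hypothetical measured and predicted series:

    import numpy as np

    def mape(measured, predicted):
        measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
        return 100.0 * np.mean(np.abs((measured - predicted) / measured))

    # Hypothetical second-by-second CO2 values (g/s) from PEMS vs. the MEF model.
    print(mape([2.1, 2.4, 3.0, 2.8], [2.0, 2.5, 2.8, 2.9]))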
959 Modeling and Analysis of the Effects of Nephrolithiasis in Kidney Using a Computational Tactile Sensing Approach
Authors: Elnaz Afshari, Siamak Najarian
Abstract:
Considering the tactile sensing and palpation a surgeon performs in order to detect a kidney stone during open surgery, we present a 2D model of nephrolithiasis (a two-dimensional model of the kidney containing a simulated stone). The effects of the stone's presence that appear on the surface of the kidney (because of an exerted mechanical load) are determined. Using the finite element method, it is illustrated that the stress patterns created on the surface of the kidney and the stress graphs not only show the existence of a stone inside the kidney, but also show its exact location.
Keywords: Nephrolithiasis, minimally invasive surgery, artificial tactile sensing, finite element method.
958 Estimation of Real Power Transfer Allocation Using Intelligent Systems
Authors: H. Shareef, A. Mohamed, S. A. Khalid, Aziah Khamis
Abstract:
This paper presents the application of artificial intelligence (AI) techniques, namely the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), to estimate the real power transfer between generators and loads. Since these AI techniques adopt supervised learning, the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to the loads. Then the results of the MNE method and load flow information are utilized to estimate the power transfer using the AI techniques. The 25-bus equivalent system of south Malaysia is utilized as a test system to illustrate the effectiveness of both AI methods compared to that of the MNE method. The mean squared errors of the ANN and ANFIS power transfer allocation estimates are 1.19E-05 and 2.97E-05, respectively. Furthermore, the ANN and ANFIS methods compute the generator contribution to loads within 20.99 and 39.37 ms, respectively, whereas the MNE method took 360 ms for the same real power transfer allocation calculation.
Keywords: Artificial intelligence, Power tracing, Artificial neural network, ANFIS, Power system deregulation.
957 On the Variability of Tool Wear and Life at Disparate Operating Parameters
Authors: S. E. Oraby, A.M. Alaskari
Abstract:
The stochastic nature of tool life, estimated using conventional discrete wear data from experimental tests, usually arises from many individual and interacting parameters. It is common practice in batch production to continually use the same tool to machine different parts using disparate machining parameters. In such an environment, the optimal points at which tools have to be changed, while achieving minimum production cost and maximum production rate within the surface roughness specifications, have not been adequately studied. In the current study, two relevant aspects are investigated using coated and uncoated inserts in turning operations: (i) the accuracy of using machinability information from fixed-parameter testing procedures when variable-parameter situations emerge, and (ii) the credibility of tool life machinability data from prior discrete testing procedures in non-stop machining. A novel technique is proposed and verified to normalize the conventional fixed-parameter machinability data to suit cases where parameters have to be changed for the same tool. Also, an experimental investigation has been established to evaluate the error in tool life assessment when machinability from discrete testing procedures is employed in uninterrupted practical machining.
Keywords: Machinability, tool life, tool wear, wear variability
956 River Flow Prediction Using Nonlinear Prediction Method
Authors: N. H. Adenan, M. S. M. Noorani
Abstract:
River flow prediction is essential to ensure that proper management of water resources can optimally distribute water to consumers. This study presents analysis and prediction using a nonlinear prediction method on monthly river flow data from Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional series (the observed 287 months of data) in a multidimensional phase space to reveal the dynamics of the system. The result of the phase space reconstruction is then used to predict the next 72 months. Prediction performance, based on the correlation coefficient (CC) and root mean square error (RMSE), is compared for the nonlinear prediction method, ARIMA and SVM. The comparison shows that the prediction results using the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.
Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.
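A minimal sketch of delay-coordinate phase space reconstruction followed by a nearest-neighbour prediction step (a zeroth-order simplification of the local linear approximation); the embedding dimension, delay and neighbour count are illustrative, not the values identified in the study:

    import numpy as np

    def embed(series, dim=3, delay=1):
        """Delay-coordinate embedding of a 1-D series into dim-dimensional vectors."""
        n = len(series) - (dim - 1) * delay
        return np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])

    def predict_next(series, dim=3, delay=1, k=5):
        """One-step prediction: average the successors of the k nearest embedded states."""
        vecs = embed(series, dim, delay)
        current, history = vecs[-1], vecs[:-1]
        dists = np.linalg.norm(history - current, axis=1)
        nearest = np.argsort(dists)[:k]
        successors = series[nearest + (dim - 1) * delay + 1]
        return successors.mean()

    # Hypothetical 287-month flow series standing in for the Tanjung Tualang data.
    flow = np.sin(np.linspace(0, 60, 287)) + 0.1 * np.random.default_rng(2).normal(size=287)
    print(predict_next(flow))          # forecast for month 288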
955 Global Kinetics of Direct Dimethyl Ether Synthesis Process from Syngas in Slurry Reactor over a Novel Cu-Zn-Al-Zr Slurry Catalyst
Authors: Zhen Chen, Haitao Zhang, Weiyong Ying, Dingye Fang
Abstract:
The direct synthesis process of dimethyl ether (DME) from syngas in slurry reactors is considered promising because of its advantages in heat transfer. In this paper, the influences of operating conditions (temperature, pressure and weight hourly space velocity) on the conversion of CO and the selectivity of DME and methanol were studied in a stirred autoclave over a Cu-Zn-Al-Zr slurry catalyst, which is far more suitable for the liquid-phase dimethyl ether synthesis process than commercial bifunctional catalysts. A Langmuir-Hinshelwood-type global kinetics model for liquid-phase direct DME synthesis, based on methanol synthesis models and a methanol dehydration model, has been investigated by fitting our experimental data. The model parameters were estimated with a MATLAB program based on genetic algorithms and the Levenberg-Marquardt method; the model fits the experimental data well and its reliability was verified by statistical tests and residual error analysis.
Keywords: Alcohol/ether fuel, Cu-Zn-Al-Zr slurry catalyst, global kinetics, slurry reactor
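A minimal sketch of the two-stage estimation strategy (a global evolutionary search followed by Levenberg-Marquardt refinement) using SciPy in place of the MATLAB routines; the rate expression is a generic Langmuir-Hinshelwood form with hypothetical data, not the authors' model:

    import numpy as np
    from scipy.optimize import differential_evolution, least_squares

    # Hypothetical observed rates at several CO partial pressures.
    p_co = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    r_obs = np.array([0.21, 0.33, 0.45, 0.52, 0.55])

    def rate(params, p):
        k, K = params                         # generic Langmuir-Hinshelwood form
        return k * K * p / (1.0 + K * p)

    def residuals(params):
        return rate(params, p_co) - r_obs

    # Stage 1: evolutionary global search (standing in for the genetic algorithm).
    coarse = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                                    bounds=[(1e-3, 10.0), (1e-3, 10.0)], seed=0)

    # Stage 2: Levenberg-Marquardt refinement from the global-search estimate.
    fine = least_squares(residuals, coarse.x, method="lm")
    print(fine.x)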
954 OCR for Script Identification of Hindi (Devnagari) Numerals using Error Diffusion Halftoning Algorithm with Neural Classifier
Authors: Banashree N. P., Andhe Dharani, R. Vasanta, P. S. Satyanarayana
Abstract:
Applications involving numbers are so widespread that there is much scope for study. The style of writing numbers is diverse and comes in a variety of forms, sizes and fonts. Identification of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), machine-printed or handwritten characters/numerals are recognized. There are many approaches that deal with the problem of detecting numerals/characters, depending on the type of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, one of the most widely used scripts in the Indian subcontinent. Our work focuses on a local feature extraction technique, a method using the 16-segment display concept, in which features are extracted from halftoned and binary images of isolated numerals. These feature vectors are fed to a neural classifier model that has been trained to recognize Hindi numerals. The prototype system has been tested on a variety of numeral images. Experimental results show that the recognition rate for halftoned images is 98%, compared to 95% for binary images.
Keywords: OCR, halftoning, neural classifier, 16-segment display concept.
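A minimal sketch of error diffusion halftoning with the Floyd-Steinberg kernel, the most common error-diffusion scheme (the paper does not state its exact kernel, so this choice is an assumption), applied to a grayscale numeral image held as a NumPy array:

    import numpy as np

    def error_diffusion_halftone(gray):
        """Floyd-Steinberg error diffusion: 8-bit grayscale -> binary (0/255) image."""
        img = gray.astype(float).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 255.0 if old >= 128.0 else 0.0
                img[y, x] = new
                err = old - new                         # quantization error to diffuse
                if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
                if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
                if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
        return img.astype(np.uint8)

    # Tiny synthetic demo; 16-segment features would then be extracted from the
    # halftoned and binary versions of each isolated numeral image.
    numeral_gray = np.tile(np.linspace(0, 255, 32, dtype=np.uint8), (32, 1))
    print(error_diffusion_halftone(numeral_gray)[:2])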