Search results for: panel data regression models
7789 Speed Characteristics of Mixed Traffic Flow on Urban Arterials
Authors: Ashish Dhamaniya, Satish Chandra
Abstract:
Speed and traffic volume data are collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data are analyzed to fit statistical distributions to individual vehicle speeds and to the combined speeds of all vehicles. It is noted that the speed of an individual vehicle generally follows a normal distribution, but the speed data of all vehicles combined at a section of urban road may or may not follow the normal distribution, depending upon the composition of the traffic stream. A new term, Speed Spread Ratio (SSR), is introduced in this paper; it is the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range of 0.86–1.11. This range of SSR is also validated on four-lane roads.
Keywords: Normal distribution, percentile speed, speed spread ratio, traffic volume.
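A minimal illustration of the SSR computation described above, using NumPy percentiles on simulated spot speeds (the 0.86–1.11 normality band is taken from the abstract; the data and the threshold check are purely illustrative):
```python
import numpy as np

# Simulated spot speeds (km/h) for one road section; purely illustrative data.
rng = np.random.default_rng(0)
speeds = rng.normal(loc=45.0, scale=8.0, size=500)

v15, v50, v85 = np.percentile(speeds, [15, 50, 85])

# Speed Spread Ratio: (85th - 50th percentile) / (50th - 15th percentile).
ssr = (v85 - v50) / (v50 - v15)

# The abstract reports that combined speed data on six-lane urban roads follow a
# normal distribution only when SSR lies in the range 0.86 - 1.11.
print(f"SSR = {ssr:.2f}, within normality band: {0.86 <= ssr <= 1.11}")
```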
7788 A Quantitative Model for Determining the Area of the “Core and Structural System Elements” of Tall Office Buildings
Authors: Görkem Arslan Kılınç
Abstract:
Due to the high construction, operation, and maintenance costs of tall buildings, quantifying the share of the plan layout that provides a financial return is an important design criterion. The area of the “core and the structural system elements” does not provide a financial return but must exist in the plan layout. Some characteristic items of tall office buildings affect the size of this area. From this point of view, 15 tall office buildings were systematically investigated. The typical office floor plans of these buildings were re-produced digitally, and the area of the “core and the structural system elements” and the characteristic items of each building were calculated. These characteristic items are the long and short plan edge sizes, plan length/width ratio, core long and short edge sizes, core length/width ratio, core area, slenderness, building height, number of floors, and floor height. These items were analyzed by correlation and regression analyses. The results of this paper indicate that the characteristic items which affect the area of the "core and structural system elements" are the plan long and short edge sizes, core short edge size, building height, and the number of floors. A one-unit increase in plan short edge size increases the area of the "core and structural system elements" in the plan by 12,378 m², and an increase in core short edge size increases it by 25,650 m². Subsequent studies can be conducted by expanding the sample of the study and considering the geographical location of the building.
Keywords: Core area, correlation analysis, floor area, regression analysis, space efficiency, tall office buildings.
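As a sketch of the correlation and regression analysis described in the abstract, the snippet below fits a multiple linear regression of the core-and-structure area on a few characteristic items; the column names and values are hypothetical stand-ins, not the paper's data set:
```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame with one row per building; column names are illustrative,
# not taken from the study's sample of 15 buildings.
df = pd.DataFrame({
    "plan_short_edge_m": [35, 40, 42, 38, 45, 50],
    "core_short_edge_m": [10, 12, 13, 11, 14, 16],
    "n_floors":          [40, 45, 50, 42, 55, 60],
    "core_area_m2":      [600, 720, 790, 660, 880, 1010],
})

# Pearson correlations between the characteristic items and the response.
print(df.corr()["core_area_m2"])

# Multiple linear regression of core-and-structure area on the candidate items;
# the slope coefficients play the role of the per-unit increases quoted above.
X = sm.add_constant(df[["plan_short_edge_m", "core_short_edge_m", "n_floors"]])
model = sm.OLS(df["core_area_m2"], X).fit()
print(model.params)
```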
7787 A Comparative Study between Discrete Wavelet Transform and Maximal Overlap Discrete Wavelet Transform for Testing Stationarity
Authors: Amel Abdoullah Ahmed Dghais, Mohd Tahir Ismail
Abstract:
In this paper, the core objective is to apply the discrete wavelet transform and the maximal overlap discrete wavelet transform, using the Haar, Daubechies2, Symmlet4, Coiflet2 and discrete Meyer approximation wavelets, to non-stationary financial time series data from the Dow Jones index (DJIA30) of the US stock market. The data consist of 2048 daily closing index values from December 17, 2004 to October 23, 2012. A unit root test affirms that the data are non-stationary in levels. A comparison of the results of transforming the non-stationary data to stationary data using the aforesaid transforms is given, which clearly shows that decomposition of the stock market index by the discrete wavelet transform is better than the maximal overlap discrete wavelet transform for the original data.
Keywords: Discrete wavelet transform, maximal overlap discrete wavelet transform, stationarity, autocorrelation function.
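The comparison above can be reproduced in outline with PyWavelets and an augmented Dickey-Fuller test; note that PyWavelets does not ship a MODWT routine, so the closely related undecimated (stationary) wavelet transform pywt.swt is used here as a stand-in, and the series is synthetic rather than the DJIA30 data:
```python
import numpy as np
import pywt
from statsmodels.tsa.stattools import adfuller

# Synthetic non-stationary series standing in for the DJIA closing index
# (2048 points, random walk); the real study used daily data from 2004 to 2012.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 2048)) + 100

print("ADF p-value, original series:", adfuller(prices)[1])  # typically > 0.05

# Discrete wavelet transform (Daubechies-2): the detail coefficients behave like a
# differenced series and tend to be stationary.
_, d1 = pywt.wavedec(prices, "db2", level=1)
print("ADF p-value, DWT detail:", adfuller(d1)[1])

# Undecimated (stationary) wavelet transform used as a stand-in for the MODWT.
_, d1_swt = pywt.swt(prices, "db2", level=1)[0]
print("ADF p-value, SWT detail:", adfuller(d1_swt)[1])
```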
7786 Comparative Study of Transformed and Concealed Data in Experimental Designs and Analyses
Authors: K. Chinda, P. Luangpaiboon
Abstract:
This paper presents a comparative study of coded-data methods for assessing the benefit of concealing natural data that constitute a trade secret. The influence of the number of replicates (rep), treatment effects (τ) and standard deviation (σ) on the efficiency of each transformation method is investigated. The experimental data are generated via computer simulation under the specified process conditions with a completely randomized design (CRD). Three data transformations are considered: the Box-Cox, arcsine and logit methods. The differences in F statistics between coded and natural data (Fc − Fn) and the hypothesis testing results were determined. The experimental results indicate that the Box-Cox results are significantly different from the natural data for smaller numbers of replicates and seem inappropriate when a negative lambda is assigned. On the other hand, the arcsine and logit transformations are more robust and provide more precise numerical results. In addition, alternative ways to select the lambda of the power transformation are offered to achieve more appropriate outcomes.
Keywords: Experimental designs, Box-Cox, arcsine, logit transformations.
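A small sketch of the three transformations compared in the study, applied to simulated responses with SciPy (the data are illustrative; the study's own data were generated under a CRD):
```python
import numpy as np
from scipy import stats, special

# Illustrative positive "natural" responses in (0, 1), e.g. proportions.
rng = np.random.default_rng(2)
y = rng.beta(2, 5, size=30)

# Box-Cox requires strictly positive data; lambda is estimated by maximum likelihood
# (the abstract notes that a poorly chosen, e.g. negative, lambda can be problematic).
y_boxcox, lam = stats.boxcox(y)

# Arcsine (angular) transform, the classical choice for proportions.
y_arcsine = np.arcsin(np.sqrt(y))

# Logit transform, defined for values strictly inside (0, 1).
y_logit = special.logit(y)

print(f"Box-Cox lambda = {lam:.3f}")
```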
7785 Energy Communities from Municipality Level to Province Level: A Comparison Using Autoregressive Integrated Moving Average Model
Authors: Amro Issam Hamed Attia Ramadan, Marco Zappatore, Pasquale Balena, Antonella Longo
Abstract:
Considering the energy crisis hitting Europe, it is becoming increasingly necessary to change energy policies so as to depend less on fossil fuels and to replace them with energy from renewable sources. This has triggered the urge to use clean energy, not only to satisfy energy needs and fulfill the required consumption, but also to decrease the danger of climatic change due to harmful emissions. Many countries have already started creating energy communities based on renewable energy sources. The first step to understanding the energy needs of any place is to know its consumption precisely. In this work, we estimate electricity consumption for a municipality in a rural area of southern Italy using forecast models that estimate electricity consumption for the next 10 years. We then apply the same model to the province in which the municipality is located and estimate future consumption for the same period, to examine whether it is possible to start from the municipality level and reach the province level when creating energy communities.
Keywords: ARIMA, electricity consumption, forecasting models, time series.
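A minimal ARIMA forecasting sketch in the spirit of the abstract, using statsmodels; the series, the (1, 1, 1) order and the units are placeholders rather than the fitted model of the paper:
```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative annual electricity consumption (GWh) for a municipality; the ARIMA
# order (1, 1, 1) and the numbers are placeholders, not the values from the paper.
consumption = np.array([41.2, 42.0, 43.1, 42.7, 44.0, 45.2, 46.1, 45.8, 47.0, 48.3])

model = ARIMA(consumption, order=(1, 1, 1)).fit()

# Forecast the next 10 years, as done for both the municipality and the province.
print(model.forecast(steps=10))
```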
7784 Relative Mapping Errors of Linear Time Invariant Systems Caused By Particle Swarm Optimized Reduced Order Model
Authors: G. Parmar, S. Mukherjee, R. Prasad
Abstract:
The authors present an optimization algorithm for order reduction and its application to the determination of the relative mapping errors of linear time invariant dynamic systems by their simplified models. These relative mapping errors are expressed by means of the relative integral square error criterion, which is determined for both unit step and impulse inputs. The reduction algorithm is based on minimization of the integral square error by the particle swarm optimization technique, pertaining to a unit step input. The algorithm is simple and computer oriented. It is shown that the algorithm has several advantages, e.g. the reduced order models retain the steady-state value and stability of the original system. Two numerical examples are solved to illustrate the superiority of the algorithm over some existing methods.
Keywords: Order reduction, particle swarm optimization, relative mapping error, stability.
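The integral square error that the particle swarm optimizer minimizes can be illustrated as follows; the two transfer functions are hypothetical, and the relative form shown normalises by the energy of the original response's deviation from its steady-state value (one common convention, assumed here):
```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

# Original second-order system and a hypothetical first-order reduced model;
# both transfer functions are illustrative, not the examples solved in the paper.
G  = signal.TransferFunction([2.0], [1.0, 3.0, 2.0])   # 2 / (s^2 + 3s + 2)
Gr = signal.TransferFunction([1.0], [1.0, 1.0])        # 1 / (s + 1)

t = np.linspace(0.0, 10.0, 2001)
_, y  = signal.step(G,  T=t)
_, yr = signal.step(Gr, T=t)

# Integral square error between the unit-step responses; the relative form divides
# by the energy of the original response's deviation from its steady-state value.
ise = trapezoid((y - yr) ** 2, t)
rel_ise = ise / trapezoid((y - y[-1]) ** 2, t)
print(f"ISE = {ise:.4f}, relative ISE = {rel_ise:.4f}")
```
In the paper's approach, a particle swarm optimizer would search the reduced model's coefficients to minimize this ISE.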
7783 Drivers of Land Degradation in Trays Ecosystem as Modulated under a Changing Climate: Case Study of Côte d'Ivoire
Authors: Kadio Valere R. Angaman, Birahim Bouna Niang
Abstract:
Land degradation is a serious problem in developing countries, including Côte d'Ivoire, whose economy is focused on agriculture. It occurs in all kinds of ecosystems over the world. However, the drivers of land degradation vary from one region to another and from one ecosystem to another. Thus, identifying these drivers is an essential prerequisite to developing and implementing appropriate policies to reverse the trend of land degradation in the country, especially in the trays ecosystem. Using a binary logistic model with primary data obtained from 780 surveyed farmers, we analyze and identify the drivers of land degradation in the trays ecosystem. The descriptive statistics show that 52% of the farmers interviewed reported facing land degradation on their farmland. This high rate shows the extent of land degradation in this ecosystem. The results obtained from the binary logit regression reveal that land degradation is significantly influenced by a set of variables such as sex, education, slope, erosion, pesticide use, agricultural activity, deforestation, and temperature. The drivers identified are mostly local; as a result, the government must implement policies and strategies that facilitate and incentivize the adoption of sustainable land management practices by farmers to reverse the negative trend of land degradation.
Keywords: Drivers, land degradation, trays ecosystem, sustainable land management.
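A hedged sketch of the binary logit estimation named in the abstract, using statsmodels on simulated survey-style data (the variable names echo some of the listed drivers, but the values are not the study's 780 observations):
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical farmer-survey data; simulated for illustration only.
rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "education_years": rng.integers(0, 13, n),
    "slope_degrees":   rng.uniform(0, 15, n),
    "deforestation":   rng.integers(0, 2, n),
})
linpred = -1.0 + 0.1 * df["slope_degrees"] + 0.8 * df["deforestation"]
df["degraded"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-linpred))).astype(int)

# Binary logistic regression of reported land degradation on candidate drivers.
X = sm.add_constant(df[["education_years", "slope_degrees", "deforestation"]])
result = sm.Logit(df["degraded"], X).fit(disp=False)
print(result.summary())
```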
7782 Applying Theory of Perceived Risk and Technology Acceptance Model in the Online Shopping Channel
Authors: Yong-Hui Li, Jing-Wen Huang
Abstract:
With the advancement of technology, the online shopping channel has developed rapidly in recent years. According to a report of the Taiwan Network Information Center, almost eighty percent of the internet population shops through the online channel. Synthesizing insights from previous research, this study develops a conceptual model that integrates the Theory of Perceived Risk (TPR) and the Technology Acceptance Model (TAM) and applies it to online shopping. Using data collected from 637 respondents via an online survey website, we use structural equation modeling to test the measurement and structural models. The results suggest the need to consider perceived risk as an antecedent in the Technology Acceptance Model. The limitations and implications are discussed.
Keywords: perceived risk, perceived usefulness, perceived ease of use, behavioral intention, actual purchase behavior
7781 Design of a Low Cost Motion Data Acquisition Setup for Mechatronic Systems
Authors: Barış Can Yalçın
Abstract:
Motion sensors are commonly used as valuable components in mechatronic systems; however, many mechatronic designs and applications that need motion sensors cost enormous amounts of money, especially high-tech systems. Designing software for the communication protocol between the data acquisition card and the motion sensor is another issue that has to be solved. This study presents how to design a low cost motion data acquisition setup consisting of an MPU 6050 motion sensor (gyroscope and accelerometer in 3 axes) and an Arduino Mega2560 microcontroller. The design parameters are calibration of the sensor, identification of and communication between the sensor and the data acquisition card, and interpretation of the data collected by the sensor.
Keywords: Calibration of sensors, data acquisition.
7780 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Geryes Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) operates in two phases: (1) observation/measurement, i.e., the accumulation of gathered data at each sensor node; and (2) transfer of the collected data to a processing center (e.g. fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion server, etc.). The logical and physical components used in these observatories perform some critical functions, such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g. military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design activity of complex systems can be improved by using MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step, via models and simulation, to consolidate the system design.
Keywords: Smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS.
7779 Real Time Approach for Data Placement in Wireless Sensor Networks
Authors: Sanjeev Gupta, Mayank Dave
Abstract:
The issue of real-time and reliable report delivery is extremely important for making effective decisions in real-world, mission-critical Wireless Sensor Network (WSN) based applications. Sensor data behave differently in many ways from the data in traditional databases. WSNs need a mechanism to register, process queries, and disseminate data. In this paper we propose an architectural framework for data placement and management. We propose a reliable and real-time approach for data placement and for achieving data integrity using self-organized sensor clusters. Instead of storing information in individual cluster heads, as suggested in some protocols, in our architecture we suggest storing the information of all clusters within a cell in the corresponding base station. For data dissemination and action in the wireless sensor network we propose the use of Action and Relay Stations (ARS). To reduce the average energy dissipation of sensor nodes, the data are sent to the nearest ARS rather than the base station. We have designed our architecture in such a way as to achieve greater energy savings, enhanced availability and reliability.
Keywords: Cluster head, data reliability, real time communication, wireless sensor networks.
7778 Data Mining in Medicine Domain Using Decision Trees and Support Vector Machine
Authors: Djamila Benhaddouche, Abdelkader Benyettou
Abstract:
In this paper, we used data mining to extract biomedical knowledge. In general, complex biomedical data collected in population studies are treated by statistical methods; although these are robust, they are not sufficient in themselves to harness the potential wealth of the data. For this purpose, two learning algorithms were used: decision trees and the Support Vector Machine (SVM). These supervised classification methods are used to make the diagnosis of thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
Keywords: Classifier, decision tree algorithms, knowledge extraction, Support Vector Machine.
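As an illustration of the two classifiers named above, the snippet below trains a decision tree and an SVM with scikit-learn; since the thyroid data set used in the paper is not publicly bundled, a bundled medical data set stands in purely to show the workflow:
```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Stand-in medical data set (not the thyroid data of the paper).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Decision tree classifier.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# SVM with RBF kernel; features are standardized first, as is usual for SVMs.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X_train, y_train)

print("Decision tree accuracy:", tree.score(X_test, y_test))
print("SVM accuracy:          ", svm.score(X_test, y_test))
```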
7777 CFD Analysis of Incompressible Turbulent Swirling Flow through Circle Grids Space Filling Plate
Authors: B. Manshoor, M. Jaat, Amir Khalid
Abstract:
A circle grid space filling plate is a flow conditioner with a fractal pattern, used to eliminate turbulence originating from pipe fittings in experimental fluid flow applications. In this paper, steady-state, incompressible, swirling turbulent flow through a circle grid space filling plate has been studied. The solution and the analysis were carried out using the finite volume CFD solver FLUENT 6.2. Three turbulence models were used in the numerical investigation and their results were compared with the pressure drop correlation of BS EN ISO 5167-2:2003. The turbulence models investigated here are the standard k-ε, realizable k-ε, and the Reynolds Stress Model (RSM). The results showed that the RSM model gave the best agreement with the ISO pressure drop correlation. The effects of circle grid space filling plate thickness and Reynolds number on the flow characteristics have been investigated as well.
Keywords: Flow conditioning, turbulent flow, turbulent modeling, CFD.
7776 Reduced Order Modelling of Linear Dynamic Systems using Particle Swarm Optimized Eigen Spectrum Analysis
Authors: G. Parmar, S. Mukherjee, R. Prasad
Abstract:
The authors present an algorithm for order reduction of linear time invariant dynamic systems using the combined advantages of eigen spectrum analysis and error minimization by the particle swarm optimization technique. The pole centroid and system stiffness of both original and reduced order systems remain the same in this method to determine the poles, whereas zeros are synthesized by minimizing the integral square error between the transient responses of the original and reduced order models using the particle swarm optimization technique, pertaining to a unit step input. It is shown that the algorithm has several advantages, e.g. the reduced order models retain the steady-state value and stability of the original system. The algorithm is illustrated with the help of two numerical examples and the results are compared with other existing techniques.
Keywords: Eigen spectrum, integral square error, order reduction, particle swarm optimization, stability.
7775 A Software Framework for Predicting Oil-Palm Yield from Climate Data
Authors: Mohd. Noor Md. Sap, A. Majid Awan
Abstract:
Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data to find spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space and therefore simplifies exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, allowing effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield.
7774 Effect of Elevation and Wind Direction on Silicon Solar Panel Efficiency
Authors: Abdulrahman M. Homadi
Abstract:
As a great source of renewable energy, solar energy is considered to be one of the most important in the world, since it will be one of the solutions to cover the energy shortage in the future. Photovoltaics (PV) is the most popular and widely used among solar energy technologies. However, PV efficiency is fairly low and PV remains somewhat expensive. High temperature has a negative effect on PV efficiency, and a cooling system for these panels is vital, especially in warm weather conditions. This paper presents the results of a simulation study carried out on silicon solar cells to assess the effect of elevation on enhancing the efficiency of solar panels. The study included four different terrains and also took into account the direction of the wind hitting the solar panels. To ensure the simulation mimics reality, six silicon solar panels are arranged in two columns and three rows, facing south at an angle of 30°. The elevations are assumed to change from 10 meters to 200 meters. The results show that the maximum increase in efficiency occurs when the wind comes from the north, hitting the back of the panels.
Keywords: Solar panels, elevation, wind direction, efficiency.
7773 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map
Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo
Abstract:
Recently, technologies based on three-dimensional (3D) space information are being developed and quality of life is improving as a result. Research on real-time digital map (RDM) is being conducted now to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research subjects on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis using 3D space information in a U-City and for other space information utilization technologies.
Keywords: RDM, multi-source data, big data, U-City.
7772 Design Optimization of Cutting Parameters when Turning Inconel 718 with Cermet Inserts
Authors: M. Aruna, V. Dhanalaksmi
Abstract:
Inconel 718, a nickel-based super-alloy, is an extensively used alloy, accounting for about 50% by weight of the materials used in an aerospace engine, mainly in the gas turbine compartment. This is owing to its outstanding strength and oxidation resistance at elevated temperatures in excess of 550 °C. Machining is a requisite operation in the aircraft industries for the manufacture of components, especially for gas turbines. This paper is concerned with optimization of the surface roughness when turning Inconel 718 with cermet inserts. Optimization of the turning operation is very useful to reduce the cost and time of machining. The approach is based on the Response Surface Method (RSM). In this work, second-order quadratic models are developed for surface roughness, considering the cutting speed, feed rate and depth of cut as the cutting parameters, using a central composite design. The developed models are used to determine the optimum machining parameters. These optimized machining parameters are validated experimentally, and it is observed that the response values are in reasonable agreement with the predicted values.
Keywords: Inconel 718, optimization, Response Surface Methodology (RSM), surface roughness.
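A sketch of fitting a second-order (quadratic) response surface for surface roughness with scikit-learn; the design points and Ra values are invented for illustration, not the central composite design runs of the paper:
```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical central-composite-style runs: columns are cutting speed (m/min),
# feed rate (mm/rev) and depth of cut (mm); the Ra values (um) are illustrative.
X = np.array([[40, 0.050, 0.50], [60, 0.050, 0.50], [40, 0.100, 0.50], [60, 0.100, 0.50],
              [40, 0.050, 1.00], [60, 0.050, 1.00], [40, 0.100, 1.00], [60, 0.100, 1.00],
              [50, 0.075, 0.75], [30, 0.075, 0.75], [70, 0.075, 0.75], [50, 0.025, 0.75],
              [50, 0.125, 0.75], [50, 0.075, 0.25], [50, 0.075, 1.25]])
Ra = np.array([0.9, 0.7, 1.4, 1.2, 1.1, 0.9, 1.7, 1.5, 1.0, 1.2, 0.8, 0.6, 1.8, 0.8, 1.6])

# Full second-order response surface: linear, interaction and squared terms.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, Ra)

# Predict roughness at a candidate parameter set; in RSM the fitted surface is then
# searched for the parameter combination that minimizes the response.
print(rsm.predict([[55, 0.06, 0.6]]))
```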
7771 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models
Authors: Reza Bazargan Lari, Mohammad H. Fattahi
Abstract:
Natural resources management, including water resources, requires reliable estimations of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for preprocessing practical data sets. The predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based preprocessing. Time series correlation and persistency, the minimum length sufficient for training the predicting model and the maximum valid length of predictions were also investigated through a fractal assessment.
Keywords: Wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN.
7770 Partial Oxidation of Methane in the Pulsed Compression Reactor: Experiments and Simulation
Authors: Timo Roestenberg, Maxim Glushenkov, Alexander Kronberg, Anton A. Verbeek, Theo H. vd Meer
Abstract:
The Pulsed Compression Reactor promises to be a compact, economical and energy efficient alternative to conventional chemical reactors. In this article, the production of synthesis gas using the Pulsed Compression Reactor is investigated. This is done experimentally as well as with simulations. The experiments are done by means of a single shot reactor, which replicates a representative, single reciprocation of the Pulsed Compression Reactor with great control over the reactant composition, reactor temperature and pressure, and temperature history. Simulations are done with a relatively simple method, which uses different models for the chemistry and thermodynamic properties of the species in the reactor. Simulation results show very good agreement with the experimental data, and give great insight into the reaction processes that occur within the cycle.
Keywords: Chemical reactors, energy, pulsed compression reactor, simulation.
7769 DeClEx-Processing Pipeline for Tumor Classification
Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba
Abstract:
Health issues are increasing significantly, putting a substantial strain on healthcare services. This has accelerated the integration of machine learning in healthcare, particularly following the COVID-19 pandemic, and its utilization has grown significantly. We introduce DeClEx, a pipeline which ensures that data mirror real-world settings by incorporating Gaussian noise and blur, and which employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. Denoising and deblurring, classification and explainability are thus integrated in a single pipeline called DeClEx.
Keywords: Machine learning, healthcare, classification, explainability.
7768 Uplink Throughput Prediction in Cellular Mobile Networks
Authors: Engin Eyceyurt, Josko Zec
Abstract:
Current and future cellular mobile communication networks generate enormous amounts of data. Networks have become extremely complex, with an extensive space of parameters, features and counters. These networks are unmanageable with legacy methods, and an enhanced design and optimization approach, increasingly reliant on machine learning, is necessary. This paper proposes machine learning as a viable approach to uplink throughput prediction. LTE radio metrics, such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and Signal to Noise Ratio (SNR), are used to train models to estimate the expected uplink throughput. A prediction accuracy with a high coefficient of determination of 91.2% is obtained from measurements collected with a simple smartphone application.
Keywords: Drive test, LTE, machine learning, uplink throughput prediction.
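A minimal sketch of the proposed approach, training a regressor on RSRP, RSRQ and SNR features to predict uplink throughput; the synthetic data and the choice of a random forest are assumptions for illustration, not the paper's measurements or model:
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic drive-test samples: RSRP (dBm), RSRQ (dB) and SNR (dB) as features,
# uplink throughput (Mbps) as target; all values are simulated.
rng = np.random.default_rng(4)
n = 1000
rsrp = rng.uniform(-120, -70, n)
rsrq = rng.uniform(-20, -5, n)
snr  = rng.uniform(-5, 30, n)
throughput = 0.8 * (snr + 5) + 0.05 * (rsrp + 120) + rng.normal(0, 2, n)

X = np.column_stack([rsrp, rsrq, snr])
X_tr, X_te, y_tr, y_te = train_test_split(X, throughput, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out samples:", r2_score(y_te, model.predict(X_te)))
```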
7767 Simulation Method for Determining the Thermally Induced Displacement of Machine Tools – Experimental Validation and Utilization in the Design Process
Abstract:
A novel simulation method to determine the displacements of machine tools due to thermal factors is presented. The specific characteristic of this method is the use of original CAD data from the design process chain, which is interpreted by an algorithm in terms of geometry-based allocation of convection and radiation parameters. Furthermore, analogous models relating to the thermal behaviour of machine elements, gained by extensive experimental testing with thermography imaging, are automatically implemented. This makes a transient simulation of the thermal field, and subsequently of the displacement of the machine tool, possible during the design phase. The method has been implemented and is already used industrially in the design of machining centres in order to improve the quality of the workpieces manufactured with them.
Keywords: Accuracy, design process, finite element analysis, machine tools, thermal simulation.
7766 Geostatistical Analysis and Mapping of Ground-Level Ozone in a Medium Sized Urban Area
Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez
Abstract:
Ground-level tropospheric ozone is one of the air pollutants of greatest concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high ozone-precursor emissions and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results are shown from a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain). Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations with an automatic portable analyzer, during periods selected for conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution across the city. First, the exploratory analysis revealed that the data were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Secondly, during the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps based on kriging interpolation and the kriging standard deviation.
Keywords: Kriging, map, tropospheric ozone, variogram.
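Ordinary kriging with a spherical variogram, as used in the study, can be sketched with the third-party PyKrige package (assumed available); the coordinates and ozone values below are synthetic, not the Badajoz measurements:
```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # third-party PyKrige package (assumed installed)

# Synthetic sampling campaign: x, y coordinates (m) and ozone concentrations (ug/m3).
rng = np.random.default_rng(5)
x = rng.uniform(0, 2000, 60)
y = rng.uniform(0, 2000, 60)
ozone = 60 + 0.01 * x - 0.005 * y + rng.normal(0, 3, 60)

# Ordinary kriging with a spherical variogram model.
ok = OrdinaryKriging(x, y, ozone, variogram_model="spherical")

# Predict on a regular grid; the kriging variance (ss) supports probability/hazard maps.
gridx = np.linspace(0, 2000, 50)
gridy = np.linspace(0, 2000, 50)
z_map, ss = ok.execute("grid", gridx, gridy)
print(z_map.shape, ss.shape)
```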
7765 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast, predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The advantages of high accuracy and robustness at different operating conditions, low computational time and the smaller number of data points required for calibration establish a platform on which the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: Diesel engine, machine learning, NOx emission, semi-empirical.
7764 Distributed Data-Mining by Probability-Based Patterns
Authors: M. Kargar, F. Gharbalchi
Abstract:
In this paper a new method is suggested for distributed data-mining using probability-based patterns. These patterns use decision trees and decision graphs. The patterns are intended to be valid, novel, useful, and understandable. Considering a set of functions, the system reaches a good pattern or better objectives. By using the suggested method we are able to extract useful information from massive and multi-relational databases.
Keywords: Data-mining, decision tree, decision graph, pattern, relationship.
7763 Effect of Friction Models on Stress Distribution of Sheet Materials during V-Bending Process
Authors: Maziar Ramezani, Zaidi Mohd Ripin
Abstract:
In a metal forming process, the friction between the material and the tools influences the process by modifying the stress distribution of the workpiece. This frictional behaviour is often taken into account by using a constant coefficient of friction in finite element simulations of sheet metal forming processes. However, the friction coefficient varies in time and space with many parameters. The Stribeck friction model is investigated in this study to predict the springback behaviour of AA6061-T4 sheets during the V-bending process. The coefficient of friction on the Stribeck curve depends on sliding velocity and contact pressure. The plane-strain bending process is simulated in ABAQUS/Standard. We compared the computed punch load-stroke curves and springback obtained with the constant coefficient of friction with those obtained with the defined friction model. The results clearly showed that the new friction model provides better agreement between experiments and the results of numerical simulations. The influence of friction models on the stress distribution in the workpiece is also studied numerically.
Keywords: Friction model, stress distribution, V-bending.
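One common exponential form of a Stribeck-type curve, in which the friction coefficient decays from a boundary-lubrication value to a hydrodynamic value as the Hersey number ηv/p grows, can be sketched as follows; the functional form and all parameter values are illustrative assumptions, not those calibrated for AA6061-T4 in the paper:
```python
import numpy as np

def stribeck_mu(v, p, eta=0.08, mu_bl=0.15, mu_hl=0.02, h0=1e-6, k=2.0):
    """One common Stribeck-type friction law (illustrative parameter values).

    The coefficient of friction falls from a boundary-lubrication value mu_bl to a
    hydrodynamic value mu_hl as the Hersey number eta*v/p increases; v is sliding
    velocity (m/s), p contact pressure (Pa), eta lubricant viscosity (Pa.s).
    """
    hersey = eta * v / p
    return mu_hl + (mu_bl - mu_hl) * np.exp(-(hersey / h0) ** k)

# Sample the curve over a sweep of sliding velocities at a fixed contact pressure;
# in an FE model these values would replace the constant Coulomb coefficient.
v = np.logspace(-4, 0, 5)          # 0.1 mm/s to 1 m/s
print(stribeck_mu(v, p=50e6))      # 50 MPa contact pressure
```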
7762 K-Means for Spherical Clusters with Large Variance in Sizes
Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan
Abstract:
Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, this algorithm is suitable for spherically shaped clusters of similar sizes and densities. The quality of the resulting clusters decreases when the data set contains spherical clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of the small cluster's points. The experimental results reveal that the proposed algorithm produces satisfactory results.
Keywords: K-means, data clustering, cluster analysis.
7761 Biosorption of Heavy Metals Contaminating the Wonderfonteinspruit Catchment Area using Desmodesmus sp.
Authors: P.P. Diale, E. Muzenda, T.S. Matambo, D. Glasser, D. Hildebrandt, J. Zimba
Abstract:
A vast array of biological materials, especially algae, have received increasing attention for heavy metal removal. Algae have been proven to be cheaper and more effective for the removal of metallic elements from aqueous solutions. A freshwater algal strain was isolated from Zoo Lake, Johannesburg, South Africa and identified as Desmodesmus sp. This paper investigates the efficacy of Desmodesmus sp. in removing heavy metals contaminating the Wonderfonteinspruit Catchment Area (WCA) water bodies. The biosorption data fitted the pseudo-second-order and Langmuir isotherm models. The Langmuir maximum uptakes gave the sequence Mn2+ > Ni2+ > Fe2+. The best results in the kinetic study were obtained at a concentration of 120 ppm for Fe3+ and Mn2+, whilst for Ni2+ it was at 20 ppm, which is about the same as the concentrations found in the contaminated water of the WCA (Fe3+ 115 ppm, Mn2+ 121 ppm and Ni2+ 26.5 ppm).
Keywords: Biosorption, Green algae, Heavy metals, Remediation.
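The two models mentioned in the abstract can be fitted by non-linear least squares with SciPy; the concentrations, uptakes and time points below are placeholders, not the Desmodesmus sp. measurements:
```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative equilibrium data for one metal: Ce = equilibrium concentration (mg/L),
# qe = uptake (mg/g); the numbers are placeholders only.
Ce = np.array([5.0, 20.0, 40.0, 80.0, 120.0])
qe = np.array([8.0, 20.0, 28.0, 34.0, 36.0])

# Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce).
def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.05])
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")

# Pseudo-second-order kinetics: qt = k2 * qe^2 * t / (1 + k2 * qe * t).
t  = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])
qt = np.array([0.0, 12.0, 18.0, 24.0, 28.0, 29.0])

def pso(t, qe_fit, k2):
    return k2 * qe_fit**2 * t / (1.0 + k2 * qe_fit * t)

(qe_fit, k2), _ = curve_fit(pso, t, qt, p0=[30.0, 0.01])
print(f"qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```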
7760 Representing Data without Lost Compression Properties in Time Series: A Review
Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan
Abstract:
Uncertain data are believed to be an important issue in building a prediction model. The main objective in time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which address the uncertain-data condition by minimizing the loss of compression properties.
Keywords: Compression properties, uncertainty, uncertain time series, mining technique, weather prediction.