Search results for: creative instruction model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7651

5011 Zero Dimensional Simulation of Combustion Process of a DI Diesel Engine Fuelled With Biofuels

Authors: Donepudi Jagadish, Ravi Kumar Puli, K. Madhu Murthy

Abstract:

A zero-dimensional model has been used to investigate the combustion performance of a single-cylinder direct injection diesel engine fuelled by biofuels, with options such as supercharging and exhaust gas recirculation. The numerical simulation was performed at constant speed. The indicated pressure and temperature diagrams are plotted and compared for different fuels. The emissions of soot and nitrous oxide are computed with phenomenological models. Experimental work was also carried out with biodiesel (palm stearin methyl ester) diesel blends and ethanol diesel blends to validate the simulation results, and it was observed that the present model is successful in predicting engine performance with biofuels.
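As a rough illustration of the zero-dimensional (single-zone) approach summarised in this abstract, the sketch below integrates an assumed Wiebe heat-release law together with the single-zone pressure equation over crank angle. All constants (Wiebe parameters, geometry, initial pressure) are illustrative assumptions, not values from the paper; supercharging or exhaust gas recirculation would enter such a sketch only through the assumed initial pressure and charge composition.

import numpy as np

# Minimal single-zone (zero-dimensional) combustion sketch. All numbers are
# illustrative assumptions, not values from the paper.
gamma = 1.35                      # assumed ratio of specific heats
Q_total = 1200.0                  # assumed total heat released per cycle [J]
theta_s, dtheta = -10.0, 50.0     # assumed start and duration of combustion [deg]
a, m = 5.0, 2.0                   # assumed Wiebe shape parameters

def wiebe_fraction(theta):
    """Cumulative mass fraction burned (Wiebe function)."""
    x = np.clip((theta - theta_s) / dtheta, 0.0, 1.0)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

def volume(theta, Vc=3.0e-5, Vd=5.0e-4, conrod_ratio=3.5):
    """Assumed slider-crank cylinder volume [m^3] versus crank angle [deg]."""
    th = np.radians(theta)
    return Vc + 0.5 * Vd * (conrod_ratio + 1.0 - np.cos(th)
                            - np.sqrt(conrod_ratio ** 2 - np.sin(th) ** 2))

# Integrate dp = (gamma - 1)/V * dQ - gamma * p/V * dV over crank angle.
thetas = np.arange(-180.0, 180.0, 0.1)
p, pressure = 1.0e5, []           # assumed pressure at intake valve closure [Pa]
for i in range(len(thetas) - 1):
    th, dth = thetas[i], thetas[i + 1] - thetas[i]
    dQ = Q_total * (wiebe_fraction(th + dth) - wiebe_fraction(th))
    dV = volume(th + dth) - volume(th)
    p += (gamma - 1.0) / volume(th) * dQ - gamma * p / volume(th) * dV
    pressure.append(p)

print("peak indicated pressure ~ %.1f bar" % (max(pressure) / 1e5))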

Keywords: Biofuels, Zero Dimensional Modeling, Engine Performance, Engine Emissions.

5010 A New Approach for Classifying Large Number of Mixed Variables

Authors: Hashibah Hamid

Abstract:

The issue of classifying objects into one of several predefined groups when the measured variables are of mixed types has been of interest to statisticians for many years. Several methods for dealing with such situations have been introduced, including parametric, semi-parametric and nonparametric approaches. This paper discusses the problem of classifying data when the number of measured mixed variables is larger than the sample size. A proposed idea that integrates a dimensionality reduction technique, via principal component analysis, with a discriminant function based on the location model is discussed. The study aims to offer practitioners another potential tool for classification problems in which the observed variables are mixed and their number is too large.
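A minimal sketch of the two-stage idea summarised above (dimension reduction followed by a discriminant rule), assuming scikit-learn is available; ordinary linear discriminant analysis stands in here for the location-model discriminant, and the mixed data are synthetic placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 40, 120                               # fewer samples than variables
X_cont = rng.normal(size=(n, p - 20))        # continuous variables
X_bin = rng.integers(0, 2, size=(n, 20))     # mixed in: binary variables
X = np.hstack([X_cont, X_bin]).astype(float)
y = rng.integers(0, 2, size=n)               # two predefined groups

# Stage 1: PCA reduces p variables to a few scores; Stage 2: discriminant rule.
clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
print("cross-validated accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())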

Keywords: classification, location model, mixed variables, principal component analysis.

5009 An Algorithm for Autonomous Aerial Navigation using MATLAB® Mapping Tool Box

Authors: Mansoor Ahsan, Suhail Akhtar, Adnan Ali, Farrukh Mazhar, Muddssar Khalid

Abstract:

In the present era of aviation technology, autonomous navigation and control have emerged as a prime area of active research. Owing to the tremendous developments in the field, autonomous controls have led today's engineers to claim that the future of aerospace vehicles is unmanned. The development of guidance and navigation algorithms for an unmanned aerial vehicle (UAV) is an extremely challenging task, which requires efforts to meet strict, and at times conflicting, goals of guidance and control. In this paper, aircraft altitude and heading controllers and an efficient algorithm for self-governing navigation using the MATLAB® mapping toolbox are presented, which also enable loitering of a fixed-wing UAV over a specified area. For this purpose, a nonlinear mathematical model of a UAV is used. The nonlinear model is linearized around a stable trim point and decoupled for controller design. The linear controllers are tested on the nonlinear aircraft model, and a navigation algorithm is subsequently developed for autonomous flight of the UAV. Results are presented for the trajectory controllers and waypoint-based navigation. Our investigation reveals that the MATLAB® mapping toolbox can be exploited to deliver an efficient algorithm for autonomous aerial navigation of a UAV.
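The MATLAB® mapping toolbox implementation is not reproduced here; as a language-neutral sketch of the waypoint-navigation step, the snippet below computes the great-circle bearing and distance to the next waypoint and cycles through the waypoint list once inside an acceptance radius. All coordinates and radii are hypothetical.

import math

def bearing_and_distance(lat1, lon1, lat2, lon2, R=6371000.0):
    """Great-circle initial bearing [deg] and distance [m] between two points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    a = math.sin((phi2 - phi1) / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return bearing, 2 * R * math.asin(math.sqrt(a))

# Hypothetical waypoint list and acceptance radius for the guidance loop.
waypoints = [(33.60, 73.10), (33.65, 73.15), (33.70, 73.10)]
acceptance_radius = 200.0                     # metres
current, target = (33.59, 73.09), 0
heading_cmd, dist = bearing_and_distance(*current, *waypoints[target])
if dist < acceptance_radius:
    target = (target + 1) % len(waypoints)    # loiter: wrap around the waypoint list
print("commanded heading %.1f deg, distance to waypoint %.0f m" % (heading_cmd, dist))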

Keywords: Navigation, trajectory-control, unmanned aerial vehicle, PID-control, MATLAB® mapping toolbox.

5008 Bioprocess Optimization Based On Relevance Vector Regression Models and Evolutionary Programming Technique

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte

Abstract:

This paper proposes a bioprocess optimization procedure based on Relevance Vector Regression models and an evolutionary programming technique. The Relevance Vector Regression scheme allows the development of a compact and stable data-based process model while avoiding time-consuming modeling expenses. The model building and process optimization procedure can be carried out in a semi-automated way and repeated after every new cultivation run. The proposed technique was tested on a simulated mammalian cell cultivation process. The obtained results are promising and could be attractive for the optimization of industrial bioprocesses.
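A compact sketch of the optimization loop described above, with a Gaussian-process regressor standing in for the relevance vector regression model (scikit-learn has no RVR implementation) and a simple mutation-and-selection evolutionary programming step; the cultivation data and variable ranges are invented placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy "historical cultivation" data: feed rate and temperature vs. final titre.
rng = np.random.default_rng(1)
X = rng.uniform([0.1, 30.0], [1.0, 39.0], size=(40, 2))      # assumed ranges
y = -(X[:, 0] - 0.6) ** 2 - 0.05 * (X[:, 1] - 36.0) ** 2 + rng.normal(0, 0.01, 40)

surrogate = GaussianProcessRegressor().fit(X, y)   # stand-in for the RVR model

# Evolutionary programming: mutate a population of candidate operating points
# and keep the best individuals according to the surrogate prediction.
pop = rng.uniform([0.1, 30.0], [1.0, 39.0], size=(20, 2))
for generation in range(50):
    children = np.clip(pop + rng.normal(0, [0.05, 0.5], size=pop.shape),
                       [0.1, 30.0], [1.0, 39.0])
    both = np.vstack([pop, children])
    fitness = surrogate.predict(both)
    pop = both[np.argsort(fitness)[-20:]]           # survival of the fittest

best = pop[np.argmax(surrogate.predict(pop))]
print("suggested feed rate %.2f, temperature %.1f" % (best[0], best[1]))

In the semi-automated scheme described in the abstract, the surrogate would simply be refitted after each new cultivation run and the search repeated.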

Keywords: Bioprocess optimization, Evolutionary programming, Relevance Vector Regression.

5007 DWM-CDD: Dynamic Weighted Majority Concept Drift Detection for Spam Mail Filtering

Authors: Leili Nosrati, Alireza Nemaney Pour

Abstract:

Although e-mail is the most efficient and popular communication method, unwanted mass unsolicited e-mails, also called spam, endanger the existence of the mail system. This paper proposes a new algorithm called Dynamic Weighted Majority Concept Drift Detection (DWM-CDD) for content-based filtering. The design purposes of DWM-CDD are, first, to improve the accuracy of previously proposed algorithms and, second, to speed up model construction. The results show that DWM-CDD can detect both sudden and gradual changes quickly and accurately. Moreover, the time needed for model construction is less than that of previously proposed algorithms.
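A minimal sketch of the dynamic weighted majority idea underlying DWM-CDD, assuming scikit-learn naive Bayes base learners over bag-of-words mail vectors; the concept-drift-detection refinements of DWM-CDD itself are not reproduced, and all parameters are assumed values.

import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Dynamic weighted majority over a stream of (x, y) mail examples: wrong
# experts are down-weighted, weak experts pruned, and a new expert added when
# the ensemble errs. beta, theta and period are assumed, not from the paper.
beta, theta, period = 0.5, 0.01, 50
experts = [BernoulliNB().partial_fit(np.zeros((1, 100)), [0], classes=[0, 1])]
weights = [1.0]

def dwm_step(i, x, y_true):
    votes = np.zeros(2)
    for expert, w in zip(experts, weights):
        votes[expert.predict(x)[0]] += w
    y_hat = int(np.argmax(votes))
    if i % period == 0:
        weights[:] = [w * beta if e.predict(x)[0] != y_true else w
                      for e, w in zip(experts, weights)]
        keep = [j for j, w in enumerate(weights) if w >= theta] or [int(np.argmax(weights))]
        experts[:] = [experts[j] for j in keep]
        weights[:] = [weights[j] / max(weights) for j in keep]
        if y_hat != y_true:
            experts.append(BernoulliNB().partial_fit(x, [y_true], classes=[0, 1]))
            weights.append(1.0)
    for expert in experts:                      # incrementally train every expert
        expert.partial_fit(x, [y_true])
    return y_hat

# Example call with one 100-dimensional binary mail vector labelled as spam (1):
print(dwm_step(0, np.random.default_rng(0).integers(0, 2, (1, 100)), 1))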

Keywords: Concept drift, Content-based filtering, E-mail, Spam mail.

5006 Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis

Authors: Farhad Kolahan, A. Hamid Khajavi

Abstract:

Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a Simulated Annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut. The objective is to determine the proper levels of the process parameters in order to obtain a certain depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
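The sketch below illustrates the simulated-annealing step described above, searching the process-parameter space of an assumed regression model for a desired depth of cut; neither the coefficients nor the parameter ranges are the fitted values from the paper.

import math, random

# Illustrative regression model for depth of cut [mm] as a function of nozzle
# diameter d [mm], traverse rate v [mm/min], pressure p [MPa] and abrasive
# flow m [g/s]; the coefficients are made-up placeholders.
def depth_of_cut(d, v, p, m):
    return 0.8 - 2.0 * d - 0.004 * v + 0.006 * p + 0.05 * m

bounds = {"d": (0.76, 1.2), "v": (60, 200), "p": (200, 380), "m": (3, 12)}  # assumed ranges
target = 1.5                                  # desired depth of cut [mm]

def cost(x):
    return abs(depth_of_cut(*x) - target)

# Simulated annealing: random perturbations are accepted with a temperature-
# dependent probability so the search can escape local minima.
random.seed(0)
x, T = [sum(b) / 2 for b in bounds.values()], 1.0
while T > 1e-4:
    cand = [min(max(xi + random.gauss(0, 0.05 * (hi - lo)), lo), hi)
            for xi, (lo, hi) in zip(x, bounds.values())]
    delta = cost(cand) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    T *= 0.999

print("parameter levels:", [round(v, 2) for v in x],
      "predicted depth of cut: %.2f mm" % depth_of_cut(*x))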

Keywords: AWJ cutting, Mathematical modeling, Simulated Annealing, Optimization

5005 Research of Dynamics Picking Mechanism of Sulzer Projectile Loom

Authors: A. Jomartov, K. Jomartova

Abstract:

One of the main and most critical units of the Sulzer projectile loom is the picking mechanism. It is specifically designed to accelerate the projectile to a speed of 25 m/s. The initial speed of the projectile in a Sulzer projectile loom is independent of the loom speed and is determined by the potential energy of the torsion rod. This paper investigates the dynamics of the picking mechanism of the Sulzer projectile loom during its discharge. As a result of the calculation model, we obtain the law of motion of the picking lever during discharge. Constructing a dynamic model of the picking mechanism of the Sulzer projectile loom in the software package SimulationX makes it possible to perform calculations for different thicknesses of torsion rods, taking into account the backlashes in the connections, the dissipative forces and the resistance forces.
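As a toy counterpart of the dynamic model described above, the sketch below integrates a single-degree-of-freedom torsion-rod/lever equation with SciPy and reports the peak lever-tip speed. The inertia, stiffness, damping and lever radius are illustrative guesses chosen only so that the result lands near the 25 m/s quoted in the abstract; the backlash and friction effects handled in SimulationX are omitted.

import numpy as np
from scipy.integrate import solve_ivp

# Lumped torsion-rod picking-lever model J*phi'' + c*phi' + k*phi = 0,
# released from a pre-twisted angle. All parameter values are illustrative.
J, c, k = 0.05, 0.4, 3000.0       # inertia [kg m^2], damping [N m s/rad], stiffness [N m/rad]
phi0 = 0.35                       # assumed pre-twist of the torsion rod [rad]
lever_arm = 0.3                   # assumed effective picking-lever radius [m]

def rhs(t, y):
    phi, omega = y
    return [omega, -(c * omega + k * phi) / J]

sol = solve_ivp(rhs, (0.0, 0.02), [phi0, 0.0], max_step=1e-5)
tip_speed = np.abs(sol.y[1]) * lever_arm
print("peak lever-tip (projectile launch) speed ~ %.1f m/s" % tip_speed.max())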

Keywords: Dynamics, loom, picking mechanism, projectile, SimulationX.

5004 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are the most frequently occurring problems in practical situations. This paper presents a two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is employed to refine the output of the first layer and to eliminate false positives further. As a result, the proposed approach can greatly improve the accuracy under circumstances of dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
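A minimal sketch of the first-layer block features described above (an LBP histogram plus a mean intensity per block, shown here on a single grey channel); the codebook construction, matching and updating are only indicated by a comment, and the frame is a random placeholder.

import numpy as np

def lbp_8neighbour(gray):
    """Basic 8-neighbour local binary pattern codes for the image interior."""
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= ((neighbour >= c).astype(np.uint8) << bit)
    return code

def block_features(gray, block=16):
    """Per-block LBP histogram plus mean intensity, as in the first layer above."""
    feats, codes = [], lbp_8neighbour(gray)
    for y in range(0, codes.shape[0] - block + 1, block):
        for x in range(0, codes.shape[1] - block + 1, block):
            patch = codes[y:y + block, x:x + block]
            hist = np.bincount(patch.ravel(), minlength=256) / patch.size
            feats.append(np.concatenate([hist, [gray[y:y + block, x:x + block].mean()]]))
    return np.array(feats)

# Each background block would then be matched against its codebook entries
# (e.g. by histogram intersection) and flagged as foreground if no codeword matches.
frame = (np.random.default_rng(0).random((120, 160)) * 255).astype(np.uint8)
print(block_features(frame).shape)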

Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.

5003 Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is calculated as 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, and this correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer were used to characterize the modified surface layer.
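For illustration of the response-surface step, the sketch below fits a full second-order model in the four coded factors (force, ball diameter, initial roughness, passes) by least squares; the design points and responses are random placeholders, not the central composite design runs of the study.

import numpy as np
from itertools import combinations

# Fit y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) to placeholder data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 4))          # coded factor levels
y = 250 + 30 * X[:, 0] + 15 * X[:, 1] - 10 * X[:, 0] ** 2 + rng.normal(0, 2, 30)

def quadratic_design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]                 # linear terms
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]            # pure quadratic terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
ss_res = np.sum((y - A @ coef) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 of the fitted surface-hardness model: %.3f" % (1 - ss_res / ss_tot))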

Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.

5002 Simulation Study of Radial Heat and Mass Transfer Inside a Fixed Bed Catalytic Reactor

Authors: K. Vakhshouri, M.M. Y. Motamed Hashemi

Abstract:

A rigorous two-dimensional model is developed for simulating the operation of a less-investigated type of steam reformer having a considerably lower operating Reynolds number, a higher tube diameter, and no extra steam available in the feed compared with conventional steam reformers. Simulation results show that reasonable predictions can only be achieved when certain correlations for the wall-to-fluid heat transfer equations are applied. Due to the severe operating conditions, strong radial temperature gradients inside the reformer tubes have been found in all cases. Furthermore, the results show how a certain catalyst loading profile will affect the operation of the reformer.

Keywords: Steam reforming, direct reduction, heat transfer, two-dimensional model, simulation.

5001 Towards Innovation Performance among University Staff

Authors: C. S. Quah, S. P. L. Sim

Abstract:

This study examined how individuals in their respective teams contributed to innovation performance, besides defining the term innovation in their own respective views. This study also identified factors that motivated university staff to contribute to innovation products. In addition, it examined whether there is a significant relationship between professional training level and length of service among university staff towards innovation, and to what extent the two variables contributed towards innovative products. The significance of this study is that it revealed the strengths and weaknesses of the university staff when contributing to innovation performance. Stratified random sampling was employed to determine the samples representing the population of lecturers in the study, involving 123 lecturers in one of the local universities in Malaysia. The data were analyzed by categorizing the open-ended responses into themes and by using descriptive and inferential statistics for the quantitative data. This study revealed that two types of definition for the term “innovation” exist among the university staff, namely, the creation of a new product or a new approach to doing things, as well as a value-added creative way to upgrade or improve an existing process and service to be more efficient. This study found that the most prominent factor that propels them towards innovation is to improve the product in order to benefit users, followed by self-satisfaction and recognition. This implies that the staff in the organization viewed the creation of innovative products as a process of growth to fulfill the needs of others and also to realize their personal potential. This study also found that there was a significant relationship between professional training level and length of service only for the group with 4 - 6 years of service; the remaining length-of-service groups showed no significant relationship between professional training level and innovation. Moreover, results of the study on directional measures showed that the relationship between 4 - 6 years of service and professional training level among the university staff is quite weak. This implies that good organization management lies on the shoulders of the key leaders who enlighten the path to be followed by the staff.

Keywords: Innovation, length of service, performance, professional training level, motivation.

5000 Continuous Feature Adaptation for Non-Native Speech Recognition

Authors: Y. Deng, X. Li, C. Kwan, B. Raj, R. Stern

Abstract:

The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents). This is mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. The problem is also complicated by the presence of large environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented by incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, as compared to that without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, as compared to that without adaptation.
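The incremental SVD and MLLR machinery is not reproduced here; as a very rough stand-in, the sketch below re-estimates the dominant direction of recent feature frames over a sliding window with a batch SVD and removes it from each incoming frame. It is only meant to illustrate the idea of tracking and compensating local acoustic variation on-line, not the paper's algorithm.

import numpy as np

# Placeholder cepstral frames; in practice these would be MFCCs from the decoder.
rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 13))
window, k = 200, 1                            # adaptation window and rank
adapted = []
for t, frame in enumerate(stream):
    recent = stream[max(0, t - window):t + 1]
    recent = recent - recent.mean(axis=0)
    if len(recent) > k:
        _, _, vt = np.linalg.svd(recent, full_matrices=False)
        bias_dir = vt[:k]                     # leading local variation direction(s)
        frame = frame - bias_dir.T @ (bias_dir @ frame)
    adapted.append(frame)

adapted = np.array(adapted)
print(adapted.shape)                          # same frame count, compensated features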

Keywords: speaker adaptation, environment adaptation, robust speech recognition, SVD, non-native speech recognition.

4999 Modelling the Occurrence of Defects and Change Requests during User Acceptance Testing

Authors: Kevin McDaid, Simon P. Wilson

Abstract:

Software developed for a specific customer under contract typically undergoes a period of testing by the customer before acceptance. This is known as user acceptance testing and the process can reveal both defects in the system and requests for changes to the product. This paper uses nonhomogeneous Poisson processes to model a real user acceptance data set from a recently developed system. In particular a split Poisson process is shown to provide an excellent fit to the data. The paper explains how this model can be used to aid the allocation of resources through the accurate prediction of occurrences both during the acceptance testing phase and before this activity begins.
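The split Poisson process and the Bayesian treatment of the paper are not reproduced here; as a simpler illustration of nonhomogeneous-Poisson-process prediction during acceptance testing, the sketch below fits a power-law mean-value function to made-up defect-discovery times by maximum likelihood and predicts occurrences for the remainder of the phase.

import numpy as np

# Fit m(t) = a * t^b to defect-discovery times observed so far, then predict
# occurrences up to the planned end of acceptance testing. Times are placeholders.
times = np.array([1.0, 2.5, 3.0, 5.5, 8.0, 9.0, 12.0, 15.5, 20.0, 26.0])  # days
T_obs, T_end = 26.0, 30.0        # days observed so far / planned end of the phase
n = len(times)

# Maximum-likelihood estimates for the time-truncated power-law process.
b_hat = n / np.sum(np.log(T_obs / times))
a_hat = n / T_obs ** b_hat

def expected_count(t):
    return a_hat * t ** b_hat

print("fitted mean-value function m(t) = %.2f * t^%.2f" % (a_hat, b_hat))
print("expected further occurrences before day %g: %.1f"
      % (T_end, expected_count(T_end) - expected_count(T_obs)))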

Keywords: User acceptance testing, software reliability growth modelling, split Poisson process, Bayesian methods.

4998 Using RASCAL Code to Analyze the Postulated UF6 Fire Accident

Authors: J. R. Wang, Y. Chiang, W. S. Hsu, S. H. Chen, J. H. Yang, S. W. Chen, C. Shih, Y. F. Chang, Y. H. Huang, B. R. Shen

Abstract:

In this research, the RASCAL code was used to simulate and analyze the postulated UF6 fire accident which may occur in the Institute of Nuclear Energy Research (INER). There are four main steps in this research. In the first step, the UF6 data of INER were collected. In the second step, the RASCAL analysis methodology and model were established by using these data. Third, this RASCAL model was used to perform the simulation and analysis of the postulated UF6 fire accident; three cases were simulated and analyzed in this step. Finally, the analysis results of RASCAL were compared with the hazard levels of the chemicals involved. According to the compared results of the three cases, Case 3 poses the greatest danger to human health.

Keywords: RASCAL, UF6, Safety, Hydrogen fluoride.

4997 Service Architecture for 3rd Party Operator's Participation

Authors: F. Sarabchi, A. H. Darvishan, H. Yeganeh, H. Ahmadian

Abstract:

Next generation networks, with the idea of convergence of the service and control layers of existing networks (fixed, mobile and data) and with the intention of providing services over an integrated network, have opened new horizons for telecom operators. On the other hand, economic problems have caused operators to look for new sources of income, including considering new services, subscribing more users, promoting greater use of network resources, and enabling easy participation of service providers or 3rd party operators in utilizing the networks. With this requirement, an architecture based on next generation objectives for the service layer is necessary. In this paper, a new architecture based on the IMS model explains the participation of 3rd party operators in the creation and implementation of services on an integrated telecom network.

Keywords: Service model, IMS, API, Scripting language, JAIN, Parlay.

4996 The Impact of Knowledge Sharing on Innovation Capability in United Arab Emirates Organizations

Authors: S. Abdallah, A. Khalil, A. Divine

Abstract:

The purpose of this study was to explore the relationship between knowledge sharing and innovation capability, by examining the influence of individual, organizational and technological factors on knowledge sharing. The research is based on a survey of 103 employees from different organizations in the United Arab Emirates. The study is based on a model and a questionnaire that was previously tested by Lin [1]. Thus, the study aims at examining the validity of that model in UAE context. The results of the research show varying degrees of correlation between the different variables, with ICT use having the strongest relationship with the innovation capabilities of organizations. The study also revealed little evidence of knowledge collecting and knowledge sharing among UAE employees.

Keywords: Knowledge sharing, Organization Innovation, Technology Use, Innovation Capabilities.

4995 Two New Relative Efficiencies of Linear Weighted Regression

Authors: Shuimiao Wan, Chao Yuan, Baoguang Tian

Abstract:

In statistical parameter estimation theory there are usually two kinds of estimators: the least-squares estimator (LSE) and the best linear unbiased estimator (BLUE). By the theorem determining the minimum variance unbiased estimator (MVUE), the BLUE is the ideal parameter estimator in the linear model. But since the calculations are complicated, or the covariance matrix is not given, it is often hard to obtain. Therefore, people prefer to use the LSE rather than the BLUE, and this substitution entails some loss. To quantify this loss, many scholars have presented many kinds of relative efficiencies from different points of view. For the linear weighted regression model, this paper discusses the relative efficiencies of the LSE of β with respect to the BLUE of β. It also defines two new relative efficiencies and gives their lower bounds.
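For context, the following recalls the standard setting and one classical relative efficiency (not the paper's two new definitions): in the linear model $y = X\beta + \varepsilon$ with $\mathrm{Cov}(\varepsilon) = \sigma^2 \Sigma$, the two estimators are

$\hat{\beta}_{LS} = (X'X)^{-1}X'y$ and $\hat{\beta}_{BLUE} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$,

and a commonly used relative efficiency of the LSE with respect to the BLUE is

$e(\hat{\beta}_{LS}) = \left|\mathrm{Cov}(\hat{\beta}_{BLUE})\right| / \left|\mathrm{Cov}(\hat{\beta}_{LS})\right| \in (0, 1]$,

which equals 1 exactly when the two estimators coincide. The two new efficiencies of the paper are defined specifically for the weighted regression setting and are not restated here.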

Keywords: Linear weighted regression, Relative efficiency, Lower bound, Parameter estimation.

4994 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets

Authors: O. Poleshchuk, E. Komarov

Abstract:

This paper presents a regression model for interval type-2 fuzzy sets based on the least squares estimation technique. Unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for type-1 fuzzy sets whose membership functions are the lower membership function and the upper membership function of the interval type-2 fuzzy set. These aggregation intervals are called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets for the developed regression models are considered as piecewise linear functions.

Keywords: Interval type-2 fuzzy sets, fuzzy regression, weighted interval.

4993 A Description Logics Based Approach for Building Multi-Viewpoints Ontologies

Authors: M. Hemam, M. Djezzar, T. Djouad

Abstract:

We are interested in the problem of building an ontology in a heterogeneous organization, taking into account the different viewpoints and different terminologies of the communities in the organization. Such an ontology, which we call a multi-viewpoint ontology, confers on the same universe of discourse several partial descriptions, where each one is relative to a particular viewpoint. In addition, these partial descriptions share, at the global level, ontological elements constituting a consensus among the various viewpoints. In order to address this problem, we define a multi-viewpoints knowledge model based on the notions of viewpoint and ontology. The multi-viewpoints knowledge model is used to formalize the multi-viewpoints ontology in a description logics language.

Keywords: Description logic, knowledge engineering, ontology, viewpoint.

4992 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of the simple step-stress accelerated life test using a Bayesian approach for Weibull life products, under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that using direct or indirect priors affects the precision of the test.

Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.

4991 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach

Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh

Abstract:

This study presents a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. In order to achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled to a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein was capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI for use in the multi-station model. In the proposed feature selection method, some useless sub-series were omitted to achieve better performance. The results confirm the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.

Keywords: River stage-discharge process, LSSVM, discrete wavelet transform (DWT), ensemble empirical mode decomposition (EEMD), multi-station modeling.

4990 Corruption, Economic Growth, and Income Inequality: Evidence from Ten Countries in Asia

Authors: Chiung-Ju Huang

Abstract:

This study utilizes the panel vector error correction model (PVECM) to examine the relationship among corruption, economic growth, and income inequality experienced within ten Asian countries over the 1995 to 2010 period. According to the empirical results, we do not support the common perception that corruption decreases economic growth. On the contrary, we found that corruption increases economic growth. Meanwhile, an increase in economic growth will cause an increase in income inequality, although the effect is insignificant. Similarly, an increase in income inequality will cause an increase in economic growth but a decrease in corruption, although the effect is also insignificant.

Keywords: Corruption, economic growth, income inequality, panel vector error correction model

4989 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location one hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to its ability as a universal function approximator, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
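A minimal scikit-learn sketch of the hour-ahead regression setup described above, using synthetic stand-ins for the weather-station inputs; scikit-learn's MLP does not offer resilient propagation, so a standard optimiser is used instead, and the real inputs, wavelet denoising and irradiance field are not reproduced.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Placeholder inputs (e.g. temperature, humidity, cloud index, hour of day)
# and a synthetic next-hour solar radiation target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 4))
y = np.clip(np.sin(np.pi * X[:, 3]) * (1 - 0.7 * X[:, 2]) + rng.normal(0, 0.05, 2000), 0, None)

X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("test MSE: %.4f" % mean_squared_error(y_test, model.predict(X_test)))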

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

4988 A Comparative Study of Standard, Casted and Riveted Eye Design of a Mono Leaf Spring Using CAE Tools

Authors: Gian Bhushan, Vinkel Arora, M.L. Aggarwal

Abstract:

The objective of the present study is to determine a better eye end design for a mono leaf spring used in a light motor vehicle. Conventional 65Si7 spring steel leaf spring models with standard, casted and riveted eye ends are considered. The CAD models of the leaf springs are prepared in CATIA and analyzed using ANSYS. The standard eye, casted eye and riveted eye leaf springs are subjected to similar loading conditions. The CAE analysis of the leaf springs is performed for various parameters such as deflection and von Mises stress. A mass reduction of 62.9% is achieved in the case of the riveted eye mono leaf spring as compared to the standard eye mono leaf spring under the same loading conditions.

Keywords: CAE, Leaf Spring, 65Si7 spring steel.

4987 An Agent Based Simulation for Network Formation with Heterogeneous Agents

Authors: Hisashi Kojima, Masatora Daito

Abstract:

We investigate an asymmetric connections model with a dynamic network formation process, using an agent-based simulation. We permit heterogeneity of agents' values. Valuable persons seem to have many links in real social networks. We focus on this point of view and examine whether valuable agents change the structures of the terminal networks. The simulation reveals that valuable agents diversify the terminal networks. We cannot find evidence that valuable agents increase the possibility that star networks survive the dynamic process. We find that valuable agents disperse the degrees of agents in each terminal network on average.
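A generic sketch of best-response link-formation dynamics with one high-value agent, in the spirit of the connections-model simulation described above; the utility specification (decay delta, link cost c) and the revision protocol are illustrative assumptions, not the paper's exact model.

import random
import networkx as nx

# Agent i pays c per link it sponsors and gains delta^(distance) * value_j from
# every reachable agent j; one agent is assigned a higher value than the rest.
random.seed(0)
n, delta, c = 8, 0.7, 0.4
values = [3.0 if i == 0 else 1.0 for i in range(n)]

def utility(G, i, sponsored):
    lengths = nx.single_source_shortest_path_length(G, i)
    benefit = sum(delta ** d * values[j] for j, d in lengths.items() if j != i)
    return benefit - c * len(sponsored[i])

G = nx.empty_graph(n)
sponsored = {i: set() for i in range(n)}
for step in range(2000):
    i, j = random.sample(range(n), 2)
    base = utility(G, i, sponsored)
    if j in sponsored[i]:
        # consider deleting a sponsored link; revert if utility drops
        G.remove_edge(i, j)
        sponsored[i].discard(j)
        if utility(G, i, sponsored) < base:
            G.add_edge(i, j)
            sponsored[i].add(j)
    elif not G.has_edge(i, j):
        # consider adding a new sponsored link; revert if utility drops
        G.add_edge(i, j)
        sponsored[i].add(j)
        if utility(G, i, sponsored) < base:
            G.remove_edge(i, j)
            sponsored[i].discard(j)

print("terminal degrees:", dict(G.degree()))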

Keywords: network formation, agent based simulation, connections model.

4986 Flow Characteristics Impeller Change of an Axial Turbo Fan

Authors: Young-Kyun Kim, Tae-Gu Lee, Jin-Huek Hur, Sung-Jae Moon, Jae-Heon Lee

Abstract:

In this paper, three-dimensional flow characteristics are presented for a revised impeller of an axial turbo fan, with the aim of improving the airflow rate and the static pressure. To consider an incompressible, steady, three-dimensional flow, the RANS equations are used as the governing equations, and the standard k-ε turbulence model is chosen. Pitch angles of 44°, 54°, 59°, and 64° are implemented for the numerical model. The numerical results show that the airflow rates for each pitch angle are 1,175 CMH, 1,270 CMH, 1,340 CMH, and 800 CMH, respectively. The differences in static pressure between the impeller inlet and outlet are 120 Pa, 214 Pa, 242 Pa, and 60 Pa for the respective pitch angles. This means that an impeller pitch angle of 59° is optimal for improving the airflow rate and the static pressure.

Keywords: Axial turbo fan, Impeller, Blade, Pitch angle.

4985 Vibration of a Beam on an Elastic Foundation Using the Variational Iteration Method

Authors: Desmond Adair, Kairat Ismailov, Martin Jaeger

Abstract:

Modelling of Timoshenko beams on elastic foundations has been widely used in the analysis of buildings, geotechnical problems, and railway and aerospace structures. For the elastic foundation, the most widely used models are one-parameter mechanical models or two-parameter models that include the continuity and cohesion of typical foundations, with the two-parameter model usually considered the better of the two. Knowledge of the free vibration characteristics of beams on an elastic foundation is considered necessary for optimal design solutions in many engineering applications, and in this work the efficient and accurate variational iteration method is developed and used to calculate the natural frequencies of a Timoshenko beam on a two-parameter foundation. The variational iteration method is a technique capable of dealing with some linear and non-linear problems in an easy and efficient way. The calculations are compared with those using a finite-element method and other analytical solutions, and it is shown that the results are accurate and are obtained efficiently. It is found that the effect of the presence of the two-parameter foundation is to increase the beam's natural frequencies, and this is thought to be because of the shear-layer stiffness, which has an effect on the elastic stiffness. By setting the two-parameter model's stiffness parameter to zero, it is possible to obtain a one-parameter foundation model, and so a comparison between the two foundation models is also made.

Keywords: Timoshenko beam, variational iteration method, two-parameter elastic foundation model.

4984 Data Analysis Techniques for Predictive Maintenance on Fleet of Heavy-Duty Vehicles

Authors: Antonis Sideris, Elias Chlis Kalogeropoulos, Konstantia Moirogiorgou

Abstract:

The present study proposes a methodology for the efficient daily management of fleet vehicles and construction machinery. The application covers the area of remote monitoring of heavy-duty vehicles' operating parameters, where specific sensor data are stored and examined in order to provide information about the vehicle's health. The vehicle diagnostics allow the user to inspect whether maintenance tasks need to be performed before a fault occurs. A properly designed machine learning model is proposed for the detection of two different types of faults through classification. Cross-validation is used, and the accuracy of the trained model is checked with the confusion matrix.
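A short sketch of the classification-with-cross-validation step described above, using a random-forest classifier on synthetic sensor features; the actual feature set, fault labels and model choice of the study are not reproduced.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, accuracy_score

# Placeholder sensor features (e.g. coolant temperature, oil pressure, engine
# hours) and labels for two hypothetical fault classes plus "healthy".
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 1.2, 1, np.where(X[:, 2] < -1.3, 2, 0))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)     # 5-fold cross-validated predictions
print("accuracy: %.3f" % accuracy_score(y, y_pred))
print("confusion matrix:\n", confusion_matrix(y, y_pred))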

Keywords: Fault detection, feature selection, machine learning, predictive maintenance.

4983 Continuous Fixed Bed Reactor Application for Decolourization of Textile Effluent by Adsorption on NaOH Treated Eggshell

Authors: M. Chafi, S. Akazdam, C. Asrir, L. Sebbahi, B. Gourich, N. Barka, M. Essahli

Abstract:

Fixed-bed adsorption has become a frequently used industrial application in wastewater treatment processes. Various low-cost adsorbents have been studied for their applicability in the treatment of different types of effluents. The intention of this work was to explore the efficacy and feasibility of adsorption of the azo dye Acid Orange 7 (AO7) onto a fixed-bed column of NaOH-treated eggshell (TES). The effects of various parameters such as flow rate, initial dye concentration, and bed height were explored in this study. The studies confirmed that the breakthrough curves were dependent on the flow rate, the initial AO7 concentration and the bed depth. The Thomas, Yoon–Nelson, and Adams and Bohart models were analysed to evaluate the column adsorption performance. The adsorption capacity, rate constant and correlation coefficient associated with each model for column adsorption were calculated and reported. The column experimental data were fitted well by the Thomas model, with coefficients of correlation R2 ≥ 0.93 under different conditions, whereas the Yoon–Nelson, BDST and Bohart–Adams models (R2 = 0.911) predicted the performance of the fixed-bed column poorly. TES was shown to be a suitable adsorbent for the adsorption of AO7 in a fixed-bed adsorption column.
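As an illustration of the Thomas-model fitting used above, the sketch below fits the rate constant k_Th and capacity q_0 to a made-up breakthrough curve with SciPy; the operating conditions and data are placeholders, not the AO7/TES measurements of the study.

import numpy as np
from scipy.optimize import curve_fit

# Thomas model: C_t/C_0 = 1 / (1 + exp(k_Th*q_0*m/Q - k_Th*C_0*t)),
# with flow rate Q, adsorbent mass m and inlet concentration C_0.
Q, m, C0 = 0.01, 5.0, 50.0            # assumed: L/min, g, mg/L
t = np.array([10, 30, 60, 90, 120, 150, 180, 240, 300], dtype=float)   # min
ct_c0 = np.array([0.02, 0.05, 0.15, 0.32, 0.55, 0.72, 0.84, 0.95, 0.98])

def thomas(t, k_th, q0):
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

(k_th, q0), _ = curve_fit(thomas, t, ct_c0, p0=[5e-4, 15.0])
resid = ct_c0 - thomas(t, k_th, q0)
r2 = 1 - np.sum(resid ** 2) / np.sum((ct_c0 - ct_c0.mean()) ** 2)
print("k_Th = %.2e L/(mg min), q0 = %.1f mg/g, R^2 = %.3f" % (k_th, q0, r2))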

Keywords: Adsorption models, acid orange 7, bed depth, breakthrough, dye adsorption, fixed-bed column, treated eggshell.

4982 3D Dynamic Representation System for the Human Head

Authors: Laurenţiu Militeanu, Cristina Gena Dascâlu, D. Cristea

Abstract:

Representations of the human head are usually based on the morphological and structural components of a real model. Over time, it has become more and more necessary to achieve fully virtual models that comply rigorously with the specifications of human anatomy. Still, making and using a model perfectly fitted to the real anatomy is a difficult task, because it requires large hardware resources and significant processing time. That is why it is necessary to choose the best compromise solution, one that keeps the right balance between perfection of detail and resource consumption, in order to obtain facial animations with real-time rendering. We present here the way in which we achieved such a 3D system, which we intend to use as a starting point for creating facial animations with real-time rendering, used in medicine to find and identify different types of pathologies.

Keywords: 3D models, virtual reality.
