Search results for: Monte Carlo methods
3291 An Experimental and Numerical Investigation of Press Force and Weld Line Displacement of Tailor Welded Blanks in Conventional and Rubber Pad Sheet Metal Forming
Authors: Amir Ansari, Ehsan Shahrjerdi, Ehsan Amini
Abstract:
To investigate the behavior of sheet metals during forming, tailor welded blanks (TWB) of various thicknesses made via CO2 laser welding are considered. These blanks are formed using two different methods: rubber pad forming and the conventional punch-and-die method. The main research objective is to determine the effect of using a rubber die instead of a solid one on the displacement of the weld line and on the press force needed for forming. Specimens with thicknesses of 0.5, 0.6, 0.8 and 1 mm are subjected to two-dimensional Erichsen tests and the resulting forces for each case are compared. This is followed by a theoretical and numerical study of press force and weld line displacement. It is concluded that using rubber pad forming (RPF) reduces the weld line displacement and increases the press force.
Keywords: Rubber pad forming, Tailor welded blank, Thickness ratio, Weld line displacement.
3290 Analysis of Diverse Cluster Ensemble Techniques
Authors: S. Sarumathi, N. Shanthi, P. Ranjetha
Abstract:
Data mining is the procedure of discovering interesting patterns in large amounts of data. Clustering, one of the most important processes supporting fast data access, is the process of identifying similarities in the data according to the characteristics present in it and grouping related data objects into clusters. A cluster ensemble is a technique that combines multiple runs of different clustering algorithms to obtain a general partition of the original dataset, aiming to consolidate the outcomes of a collection of individual clusterings. The performance of a clustering ensemble is mainly affected by two principal factors: diversity and quality. This paper gives an overview of different cluster ensemble algorithms together with the methods they use to improve diversity and quality, presents a comparative analysis of the different cluster ensembles, and summarizes various cluster ensemble methods. This analysis should be useful to clustering experts and should help in deciding the most appropriate method for the problem at hand.
Keywords: Cluster Ensemble, Consensus Function, CSPA, Diversity, HGPA, MCLA.
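As an illustration of the consensus idea these ensemble techniques share, the following minimal Python sketch builds a co-association (evidence accumulation) matrix from several k-means runs and derives a consensus partition from it; the toy data, the number of runs and the cluster counts are illustrative assumptions, not values from the surveyed papers.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # toy data (assumption)
n = len(X)

# Ensemble: several base clusterings with different k and seeds.
co_assoc = np.zeros((n, n))
runs = [(k, seed) for k in (2, 3, 4, 5) for seed in range(5)]
for k, seed in runs:
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= len(runs)  # fraction of runs in which each pair co-clustered

# Consensus function: average-linkage clustering of the co-association distances.
dist = 1.0 - co_assoc
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
consensus_labels = fcluster(Z, t=3, criterion="maxclust")
print(consensus_labels[:20])
```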
3289 Banks Profitability Indicators in CEE Countries
Abstract:
The aim of the present article is to determine the impact of external and internal factors of bank performance on the profitability indicators of CEE country banks in the period from 2006 to 2012. On the basis of research conducted abroad on bank and macroeconomic profitability indicators, the authors evaluated the return on average assets (ROAA) and return on average equity (ROAE) indicators of CEE country banks. The authors analyzed the profitability indicators of banks using descriptive methods, SPSS data analysis methods, and data correlation and linear regression analysis. The authors concluded that most internal and external indicators of bank performance have no direct influence on the profitability of banks in the CEE countries. The only exceptions are credit risk and bank size, which affect one of the measures of bank profitability, return on average equity.
Keywords: Banks, CEE countries, Profitability, ROAA, ROAE.
3288 Object Speed Estimation by using Fuzzy Set
Authors: Hossein Pazhoumand-Dar, Amir Mohsen Toliyat Abolhassani, Ehsan Saeedi
Abstract:
Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality and inexpensive video cameras, and the increasing need for automated video analysis, has generated a great deal of interest in machine vision algorithms. Numerous approaches for speed estimation have been proposed, so a classification and survey of these methods is useful. The goal of this paper is first to review and verify these methods. We then propose a novel algorithm to estimate the speed of a moving object using fuzzy concepts. There is a direct relation between motion blur parameters and object speed. In our new approach, we use the Radon transform to find the direction of the blur in the image, and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision in noisy images. Our method was tested on many images over a wide range of SNR with satisfactory results.
Keywords: Blur Analysis, Fuzzy sets, Speed estimation.
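A minimal sketch of the blur-direction step described above, assuming the common heuristic that uniform motion blur leaves stripes in the log power spectrum perpendicular to the motion; the image source and angle grid are illustrative assumptions, and the fuzzy blur-length estimation is not reproduced here.

```python
import numpy as np
from skimage.transform import radon

def estimate_blur_direction(gray_image, angles=np.arange(0.0, 180.0, 1.0)):
    """Return the angle (degrees) of the dominant stripes in the log spectrum."""
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray_image))))
    spectrum -= spectrum.mean()                     # remove the DC bias before projecting
    sinogram = radon(spectrum, theta=angles, circle=False)
    return angles[np.argmax(sinogram.var(axis=0))]  # projection with the highest variance

# Illustrative call on a synthetic horizontally blurred image (assumption):
img = np.random.rand(128, 128)
img = np.apply_along_axis(lambda r: np.convolve(r, np.ones(9) / 9, mode="same"), 1, img)
print(estimate_blur_direction(img))
```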
3287 Automata Theory Approach for Solving Frequent Pattern Discovery Problems
Authors: Renáta Iváncsy, István Vajk
Abstract:
The various types of frequent pattern discovery problems, namely frequent itemset, sequence and graph mining, are solved in different ways which are, however, similar in certain aspects. The main approaches to discovering such patterns can be classified into two classes: level-wise methods and database projection-based methods. The level-wise algorithms generally use clever indexing structures for discovering the patterns. In this paper a new approach based on the level-wise idea is proposed for efficiently discovering frequent sequences and tree-like patterns. Because level-wise algorithms spend a lot of time on the subpattern testing problem, the new approach introduces the idea of using automata theory to solve this problem.
Keywords: Frequent pattern discovery, graph mining, pushdown automaton, sequence mining, state machine, tree mining.
3286 Investigation about Mechanical Equipment Needed to Break the Molecular Bonds of Heavy Oil by Using Hydrodynamic Cavitation
Authors: Mahdi Asghari
Abstract:
The cavitation phenomenon is the formation of micro-bubbles and eventually the bursting of these micro-bubbles inside a liquid, which results in localized high pressure and temperature, causing physical and chemical changes in the fluid. This pressure and temperature are predicted to be 2000 atmospheres and 5000 °C, respectively. As the small bubbles burst, temperature and pressure increase momentarily and locally, and the intensity of these peaks provides the energy needed to break the molecular bonds of heavy compounds such as fuel oil. In this paper, we study the theory of cavitation and the methods of producing cavitation by acoustic and hydrodynamic means, as well as the mechanical equipment and reactors necessary for industrial application of the hydrodynamic cavitation method to break down the molecular bonds of fuel oil and convert it into useful and economical products.
Keywords: Cavitation, hydrodynamic cavitation, cavitation reactor, fuel oil.
3285 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features
Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim
Abstract:
Knowledge of the relationships between characters can help readers understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting the specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult to extract directed relationships, such as one-sided love, from text because these methods consider only the weight of a relationship and not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. We expect that the proposed method could also be applied to the analysis of relationships between characters in other content such as movies or TV dramas.
Keywords: Data mining, Korean linguistic feature, literary fiction, relationship extraction.
3284 Calcium Silicate Bricks – Ultrasonic Pulse Method: Effects of Natural Frequency of Transducers on Measurement Results
Authors: Jiri Brozovsky
Abstract:
Modulus of elasticity is one of the important parameters of construction materials, which considerably influences their deformation properties and which can also be determined by means of non-destructive test methods such as the ultrasonic pulse method. However, the results of ultrasonic pulse measurements are influenced by various factors, one of which is the natural frequency of the transducers. The paper presents findings on the influence of the natural frequency of the transducers (54, 82 and 150 kHz) on the ultrasonic pulse velocity and the dynamic modulus of elasticity (Young's dynamic modulus of elasticity). Differences in ultrasonic pulse velocity and dynamic modulus of elasticity were found for specimens with the same smallest dimension in the direction of sounding and the same density; their values decrease as the natural frequency of the transducers grows.
Keywords: Calcium silicate brick, ultrasonic pulse method, ultrasonic pulse velocity, dynamic modulus of elasticity.
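For reference, the dynamic modulus of elasticity is typically computed from the measured pulse velocity using the standard relation for a three-dimensional elastic medium; the sketch below encodes that relation, with the numerical inputs being illustrative assumptions rather than values from this study.

```python
def dynamic_modulus(pulse_velocity, density, poisson_ratio):
    """Young's dynamic modulus E_d = rho * v^2 * (1 + nu)(1 - 2*nu) / (1 - nu), in Pa."""
    nu = poisson_ratio
    return density * pulse_velocity ** 2 * (1.0 + nu) * (1.0 - 2.0 * nu) / (1.0 - nu)

# Illustrative values (assumed): v = 2500 m/s, rho = 1800 kg/m3, nu = 0.2
print(dynamic_modulus(2500.0, 1800.0, 0.2) / 1e9, "GPa")  # ~10.1 GPa
```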
3283 Developing Well-Being Indicators and Measurement Methods as Illustrated by Projects Aimed at Preventing Obesity in Children
Authors: E. Grochowska-Niedworok, K. Brukało, M. Hadasik, M. Kardas
Abstract:
Consumption of vegetables by school children and adolescents is essential for their normal growth, development and health, yet only a minority of the world's population consumes the right amount of these products. The aim of the study was to evaluate the preferences and frequency of vegetable consumption by school children and adolescents. It has been assumed that effectively implemented nutrition education programs should increase the frequency of vegetable consumption among their recipients. The study covered 514 students aged 9 to 22 years from five schools in the Opole Voivodeship. The research tool was the authors' own questionnaire, which consisted of closed questions on the frequency of vegetable consumption and the use of 10 methods of culinary preparation. Preferences and frequencies are shown as percentages, while correlations were estimated on the basis of Cramér's V and gamma coefficients. In each of the examined age groups, the relationship between sex and vegetable consumption was determined (Cramér's V values were 0.06 to 0.38), as well as between sex and the culinary processing methods used (Cramér's V was 0.08 to 0.34). For both sexes, relationships between age and the frequency of vegetable consumption (gamma values ranged from about 0.00 to 0.39) and between age and the cooking methods used (gamma values were 0.01 to 0.22) were shown. The most important determinants of nutritional choices are the taste and availability of products; their positive effect on health comes only in third position. As has been shown, obesity prevention programs can not only address nutrition education but also teach about new flavors and increase the availability of healthy foods. In addition, the frequency of vegetable consumption can be a good indicator of the healthy behaviors of children and adolescents.
Keywords: Children and adolescents, frequency, welfare rate, vegetables.
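A short sketch of how the Cramér's V association measure used in the study can be computed from a contingency table of sex versus consumption-frequency categories; the table counts are illustrative assumptions, not the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for an r x c contingency table: sqrt(chi2 / (n * (min(r, c) - 1)))."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# Rows: sex (girls, boys); columns: vegetable consumption frequency categories (assumed counts).
table = np.array([[40, 85, 120, 35],
                  [55, 90, 100, 29]])
print(round(cramers_v(table), 3))
```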
3282 Enhanced Approaches to Rectify the Noise, Illumination and Shadow Artifacts
Authors: M. Sankari, C. Meena
Abstract:
Enhancing the quality of two-dimensional signals is one of the most important factors in the fields of video surveillance and computer vision. In real-life video surveillance, false detections usually occur due to the presence of random noise, illumination variation and shadow artifacts. Detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variations in the detected foreground objects are corrected by using a combination of techniques: homomorphic decomposition, curvelet transformation and a gamma adjustment operator. Shadow is removed using a chromaticity estimator with a local relation estimator. Results are compared with existing methods and demonstrate high robustness in video surveillance.
Keywords: Chromaticity Estimator, Curvelet Transformation, Denoising, Gamma correction, Homomorphic, Neighborhood Assessment.
3281 DIFFER: A Propositionalization approach for Learning from Structured Data
Authors: Thashmee Karunaratne, Henrik Böstrom
Abstract:
Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large substructures from being considered by the resulting classifiers. A novel approach to learning from structured data is introduced that employs a structure transformation method, called finger printing, to address these limitations. The method, which generates features corresponding to arbitrarily complex substructures, is implemented in a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on some benchmark data sets without requiring restrictions on the search space. Furthermore, learning from the union of the features generated by finger printing and the previous method outperforms learning from each individual set of features on all benchmark data sets, demonstrating the benefit of developing complementary, rather than competing, methods for structure classification.
Keywords: Machine learning, Structure classification, Propositionalization.
3280 Improving Injection Moulding Processes Using Experimental Design
Authors: Yousef Amer, Mehdi Moayyedian, Zeinab Hajiabolhasani, Lida Moayyedian
Abstract:
Moulded parts account for more than 70% of the components in products. However, common defects, particularly in plastic injection moulding, include warpage, shrinkage, sink marks and weld lines. In this paper, Taguchi experimental design methods are applied to reduce the warpage defect of a thin Acrylonitrile Butadiene Styrene (ABS) plate at two levels: Taguchi orthogonal arrays and the Analysis of Variance (ANOVA). Eight trials were run, from which the optimal parameters that minimize the warpage defect in the factorial experiment were obtained. The results obtained from the ANOVA analysis, compared with those derived from MINITAB, identify the most significant factors that cause warpage in the injection moulding process. Moreover, the ANOVA approach is more accurate than alternatives such as the S/N ratio, and by taking the interaction of factors into account it is possible to achieve better outcomes.
Keywords: Analysis of variance, ANOVA, plastic injection mould, Taguchi methods, Warpage.
3279 Application of Transportation Linear Programming Algorithms to Cost Reduction in Nigeria Soft Drinks Industry
Authors: A. O. Salami
Abstract:
Transportation problems are primarily concerned with the optimal way in which products produced at different plants (supply origins) are transported to a number of warehouses or customers (demand destinations). The objective in a transportation problem is to fully satisfy the destination requirements within the operating production capacity constraints at the minimum possible cost. The objective of this study is to determine ways of minimizing transportation cost in order to maximize profit. Data were sourced from the records of the Distribution Department of 7-Up Bottling Company Plc., Ilorin, Kwara State, Nigeria. The data were computed and analyzed using three methods of solving the transportation problem. The result shows that the three methods produced the same total transportation cost of N1,358,019, implying that any of the methods can be adopted by the company in transporting its final products to the wholesale dealers in order to minimize total transportation cost.
Keywords: Allocation problem, Cost Minimization, Distribution system, Resources utilization.
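The transportation problem described above can be written as a linear program and solved directly; the sketch below does so with SciPy for a small balanced instance whose supplies, demands and unit costs are illustrative assumptions, not the company's data.

```python
import numpy as np
from scipy.optimize import linprog

# Balanced toy instance (assumed): 3 plants, 4 depots, unit transport costs.
supply = np.array([300, 400, 500])
demand = np.array([250, 350, 400, 200])
cost = np.array([[4, 6, 8, 5],
                 [6, 5, 7, 8],
                 [5, 7, 6, 4]])
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                       # each plant ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                       # each depot receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None), method="highs")
print("minimum total cost:", res.fun)
print(res.x.reshape(m, n))               # optimal shipment plan
```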
3278 Kernel Matching versus Inverse Probability Weighting: A Comparative Study
Authors: Andy Handouyahia, Tony Haddad, Frank Eaton
Abstract:
Recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods of estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare pairs of 1,080 estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
Keywords: Treatment effect, causal inference, observational studies, Propensity score based matching, Kernel Matching, Inverse Probability Weighting, Estimation methods for incremental effect.
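A minimal sketch of the inverse probability weighting estimator compared in the paper: fit a propensity model, weight outcomes by the inverse of the estimated treatment probability, and take the weighted difference; the synthetic data and the logistic propensity model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                                   # observed covariates
p_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))   # true selection into treatment
T = rng.binomial(1, p_true)                                   # program participation indicator
y = 1.0 * T + X[:, 0] + rng.normal(size=n)                    # outcome with true effect = 1.0

# Estimate propensity scores and form the IPW (Horvitz-Thompson) estimate of the effect.
ps = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
ps = np.clip(ps, 0.01, 0.99)                                  # guard against extreme weights
ate_ipw = np.mean(T * y / ps) - np.mean((1 - T) * y / (1 - ps))
print(round(ate_ipw, 3))                                      # should be close to 1.0
```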
3277 Forecasting Stock Indexes Using Bayesian Additive Regression Tree
Authors: Darren Zou
Abstract:
Forecasting the stock market is a very challenging task. Various economic indicators such as GDP, exchange rates, interest rates, and unemployment have a substantial impact on the stock market. Time series models are the traditional methods used to predict stock market changes. In this paper, a machine learning method, Bayesian Additive Regression Tree (BART), is used to predict stock market indexes based on multiple economic indicators. BART can be used to model heterogeneous treatment effects, and thereby works well when models are misspecified. It also has the capability to handle non-linear main effects and multi-way interactions without much input from financial analysts. In this research, BART is proposed to provide a reliable prediction of day-to-day stock market activities. Comparing the results from BART with those from time series methods shows that BART performs well and has better predictive capability than the traditional methods.
Keywords: Bayesian, Forecast, Stock, BART.
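To make the evaluation setup concrete, the sketch below fits a tree-ensemble regressor to synthetic economic indicators and scores it on a chronological hold-out; a gradient-boosted ensemble from scikit-learn is used here purely as a stand-in because BART implementations vary, and the indicator data are an assumption, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
T = 600
# Synthetic monthly indicators (assumed): GDP growth, exchange rate, interest rate, unemployment.
X = rng.normal(size=(T, 4))
index = 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.3 * X[:, 1] * X[:, 3] + rng.normal(0, 0.5, T)

# Chronological split: never train on future observations.
split = int(0.8 * T)
model = GradientBoostingRegressor(random_state=0).fit(X[:split], index[:split])
pred = model.predict(X[split:])
print("hold-out MAE:", round(mean_absolute_error(index[split:], pred), 3))
```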
3276 A Sequential Pattern Mining Method Based On Sequential Interestingness
Authors: Shigeaki Sakurai, Youichi Kitahara, Ryohei Orihara
Abstract:
Sequential mining methods efficiently discover all frequent sequential patterns included in sequential data. These methods use the support, the conventional criterion satisfying the Apriori property, to evaluate frequency. However, the discovered patterns do not always correspond to the interests of analysts, because the patterns are common and the analysts cannot obtain new knowledge from them. This paper proposes a new criterion, sequential interestingness, to discover sequential patterns that are more attractive to analysts. The paper shows that the criterion satisfies the Apriori property and how it is related to the support. The paper also proposes an efficient sequential mining method based on the proposed criterion. Lastly, the paper shows the effectiveness of the proposed method by applying it to two kinds of sequential data.
Keywords: Sequential mining, Support, Confidence, Apriori property
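A minimal sketch of the support computation that the Apriori property rests on: the support of a candidate sequential pattern is the fraction of data sequences containing it as a subsequence. The example sequences are illustrative assumptions, and the paper's sequential interestingness criterion is not reproduced here.

```python
def contains_subsequence(sequence, pattern):
    """True if `pattern` occurs in `sequence` with items in order (gaps allowed)."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def support(sequences, pattern):
    """Fraction of sequences that contain the candidate pattern."""
    return sum(contains_subsequence(s, pattern) for s in sequences) / len(sequences)

# Toy sequential data (assumed): each sequence is an ordered list of events.
data = [["a", "b", "c", "d"],
        ["a", "c", "d", "b"],
        ["b", "a", "d"],
        ["a", "b", "d", "c"]]
print(support(data, ["a", "b", "d"]))   # 0.5: contained in 2 of the 4 sequences
```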
3275 Abrupt Scene Change Detection
Authors: Priyadarshinee Adhikari, Neeta Gargote, Jyothi Digge, B.G. Hogade
Abstract:
A number of automated shot-change detection methods for indexing a video sequence to facilitate browsing and retrieval have been proposed in recent years. This paper focuses on the simulation of video shot boundary detection using a color histogram method in which scaling of the histogram metric is an added feature. The difference between the histograms of two consecutive frames is evaluated, resulting in the metric. The metric is then scaled to avoid ambiguity and to enable the choice of an appropriate threshold for any type of video, reducing minor errors due to flashlights, camera motion, etc. Two sample videos with a resolution of 352 × 240 pixels are used here with the color histogram approach on uncompressed media. An attempt is made at the retrieval of color video. The simulation is performed for abrupt changes in video and yields 90% recall and precision.
Keywords: Abrupt change, color histogram, ground-truthing, precision, recall, scaling, threshold.
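A minimal sketch of the histogram-difference test the paper simulates: per-channel histograms of consecutive frames are differenced, the metric is scaled by the frame size, and a cut is declared where it exceeds a threshold. The synthetic frames and the threshold value are illustrative assumptions.

```python
import numpy as np

def hist_difference(frame_a, frame_b, bins=32):
    """Scaled sum of absolute per-channel histogram differences between two frames."""
    diff = 0.0
    for ch in range(frame_a.shape[2]):
        ha, _ = np.histogram(frame_a[:, :, ch], bins=bins, range=(0, 256))
        hb, _ = np.histogram(frame_b[:, :, ch], bins=bins, range=(0, 256))
        diff += np.abs(ha - hb).sum()
    return diff / frame_a[:, :, 0].size      # scale by pixel count to ease thresholding

def detect_cuts(frames, threshold=1.0):
    return [i for i in range(1, len(frames))
            if hist_difference(frames[i - 1], frames[i]) > threshold]

# Synthetic 352x240 "video" (assumed): a dark scene followed by an abrupt bright scene.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 80, (240, 352, 3), dtype=np.uint8) for _ in range(5)]
bright = [rng.integers(150, 256, (240, 352, 3), dtype=np.uint8) for _ in range(5)]
print(detect_cuts(dark + bright))             # expected: [5]
```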
3274 Comparison of Neural Network and Logistic Regression Methods to Predict Xerostomia after Radiotherapy
Authors: Hui-Min Ting, Tsair-Fwu Lee, Ming-Yuan Cho, Pei-Ju Chao, Chun-Ming Chang, Long-Chang Chen, Fu-Min Fang
Abstract:
To evaluate the ability to predict xerostomia after radiotherapy, we constructed and compared neural network and logistic regression models. In this study, 61 patients who completed a questionnaire about their quality of life (QoL) before and after a full course of radiation therapy were included. Based on this questionnaire, statistical data about the condition of the patients' salivary glands were obtained and used as the inputs of the neural network and logistic regression models in order to predict the probability of xerostomia. Seven variables were then selected from the statistical data according to Cramér's V and point-biserial correlation values, and each model was trained on them; the resulting performance was 0.88 and 0.89 for AUC, 9.20 and 7.65 for SSE, and 13.7% and 19.0% for MAPE, respectively. These results demonstrate that both neural network and logistic regression methods are effective for predicting the condition of the parotid glands.
Keywords: NPC, ANN, logistic regression, xerostomia.
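A minimal sketch of the logistic-regression half of the comparison: fit the model on questionnaire-derived predictors and report AUC on held-out data. The synthetic data, the seven-feature setup and the split are illustrative assumptions, not the patient cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_features = 200, 7                         # seven selected QoL/salivary-gland variables (assumed)
X = rng.normal(size=(n, n_features))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = xerostomia after radiotherapy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```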
3273 2-Block 3-Point Modified Numerov Block Methods for Solving Ordinary Differential Equations
Authors: Abdu Masanawa Sagir
Abstract:
In this paper, a linear multistep technique using a power series as the basis function is used to develop block methods suitable for generating direct solutions of special second order ordinary differential equations of the form y′′ = f(x, y), a ≤ x ≤ b, with associated initial or boundary conditions. The continuous hybrid formulations enable us to differentiate and evaluate at some grid and off-grid points to obtain two different three-point discrete schemes, each of order (4,4,4)^T, which were used in block form for parallel or sequential solution of the problems. The computational burden and computer time wasted in the usual reduction of a second order problem into a system of first order equations are avoided by this approach. Furthermore, a stability analysis and the efficiency of the block method are tested on linear and non-linear ordinary differential equations whose solutions are oscillatory or nearly periodic in nature, and the results obtained compare favourably with the exact solutions.
Keywords: Block Method, Hybrid, Linear Multistep Method, Self-starting, Special Second Order.
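As a concrete reference for the underlying Numerov idea, the sketch below advances y'' = f(x, y) with the classical implicit Numerov formula, resolving the implicitness by fixed-point iteration; this is a plain point-by-point illustration on an assumed test problem, not the paper's 2-block 3-point hybrid scheme.

```python
import numpy as np

def numerov(f, x0, y0, y1, h, steps, inner_iters=3):
    """Integrate y'' = f(x, y) with y_{n+1} = 2*y_n - y_{n-1}
    + h^2/12 * (f_{n+1} + 10*f_n + f_{n-1}), iterating on the implicit f_{n+1}."""
    xs = x0 + h * np.arange(steps + 1)
    ys = np.empty(steps + 1)
    ys[0], ys[1] = y0, y1
    for n in range(1, steps):
        fn, fnm1 = f(xs[n], ys[n]), f(xs[n - 1], ys[n - 1])
        y_next = 2 * ys[n] - ys[n - 1] + h * h * fn          # explicit predictor
        for _ in range(inner_iters):                          # correct with the implicit term
            y_next = 2 * ys[n] - ys[n - 1] + h * h / 12 * (f(xs[n + 1], y_next) + 10 * fn + fnm1)
        ys[n + 1] = y_next
    return xs, ys

# Test problem (assumed): y'' = -y, y(0) = 0, y'(0) = 1, exact solution sin(x).
h = 0.1
f = lambda x, y: -y
y1 = 0.0 + h * 1.0 + 0.5 * h * h * f(0.0, 0.0)               # Taylor start for the second point
xs, ys = numerov(f, 0.0, 0.0, y1, h, steps=100)
print("max error vs sin(x):", np.abs(ys - np.sin(xs)).max())
```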
3272 Evaluation of Physicochemical Pretreatment Methods on COD and Ammonia Removal from Landfill Leachate
Authors: M. Poveda, S. Lozecznik, J. Oleszkiewicz, Q. Yuan
Abstract:
The goal of this experiment is to evaluate the effectiveness of different leachate pre-treatment options in terms of COD and ammonia removal. This research focused on the evaluation of physical-chemical methods for pre-treatment of leachate that would be effective and rapid in order to satisfy the requirements of the sewer discharge by-laws. The four pre-treatment options evaluated were: air stripping, chemical coagulation, electrocoagulation and advanced oxidation with sodium ferrate. Chemical coagulation reported the best COD removal rate at 43%, compared to 18% for both air stripping and electro-coagulation, and 20% for oxidation with sodium ferrate. On the other hand, air stripping was far superior to the other treatment options in terms of ammonia removal with 86%. Oxidation with sodium ferrate reached only 16%, while chemical coagulation and electro-coagulation removed less than 10%. When combined, air stripping and chemical coagulation removed up to 50% COD and 85% ammonia.
Keywords: Leachate pretreatment, air stripping, chemical coagulation, electro-coagulation, oxidation.
3271 Studying the Intercalation of Low Density Polyethylene/Clay Nanocomposites after Different UV Exposures
Authors: Samir Al-Zobaidi
Abstract:
This study attempts to understand the effect of different UV irradiation methods on the intercalation of LDPE/MMT nanocomposites and on their molecular behavior at a given isothermal crystallization temperature. Three different methods of UV exposure were employed using a single composition of LDPE/MMT nanocomposites. All samples were annealed for 5 hours at a crystallization temperature of 100 °C. The crystallization temperature was chosen at large supercooling to ensure quick and complete crystallization. The raw LDPE consisted of two stable phases, monoclinic and orthorhombic, according to the XRD results. The thermal behavior of the two phases reacted differently when the UV exposure method was changed; the monoclinic phase was more dependent on the method used than the orthorhombic phase. The intercalation of clay, as well as the non-isothermal crystallization temperature, also showed a clear dependency on the type of UV exposure. A third, thermally less stable phase was also observed. Its response to UV irradiation was greater since it contains low molecular weight entities, which make it more vulnerable to UV exposure.
Keywords: LDPE/MMT nanocomposites, crystallization, UV irradiation, intercalation.
3270 Comparison of Parameterization Methods in Recognizing Spoken Arabic Digits
Authors: Ali Ganoun
Abstract:
This paper evaluates sound parameterization methods for recognizing some spoken Arabic words, namely the digits from zero to nine. Each isolated spoken word is represented by a single template based on a specific recognition feature, and recognition is based on the Euclidean distance from those templates. The performance analysis of recognition is based on four parameterization features: the Burg spectrum analysis, the Walsh spectrum analysis, the Thomson multitaper spectrum analysis and the Mel Frequency Cepstral Coefficients (MFCC) features. The main aim of this paper was to compare, analyze, and discuss the outcomes of spoken Arabic digit recognition systems based on the selected recognition features. The results acquired confirm that the use of MFCC features is a very promising method for recognizing spoken Arabic digits.
Keywords: Speech Recognition, Spectrum Analysis, Burg Spectrum, Walsh Spectrum Analysis, Thomson Multitaper Spectrum, MFCC.
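A minimal sketch of the MFCC-based template matching described above: each digit is represented by a single mean MFCC vector, and a test utterance is assigned to the template at minimum Euclidean distance. The audio file names are hypothetical placeholders and the feature settings are assumptions.

```python
import numpy as np
import librosa

def mfcc_template(path, n_mfcc=13):
    """Average the MFCC frames of one utterance into a single template vector."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Hypothetical file layout: one reference recording per Arabic digit 0-9.
templates = {d: mfcc_template(f"refs/digit_{d}.wav") for d in range(10)}

def recognise(path):
    """Return the digit whose template is nearest (Euclidean) to the test utterance."""
    feat = mfcc_template(path)
    return min(templates, key=lambda d: np.linalg.norm(feat - templates[d]))

print(recognise("test/unknown.wav"))
```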
3269 Learning Object Interface Adapted to the Learner's Learning Style
Authors: Zenaide Carvalho da Silva, Leandro Rodrigues Ferreira, Andrey Ricardo Pimentel
Abstract:
Learning styles (LS) refer to the ways and forms in which a student prefers to learn during the teaching and learning process. Each student has their own way of receiving and processing information throughout the learning process. Therefore, knowing their LS is important to better understand their individual learning preferences, and also to understand why some teaching methods and techniques give better results with some students than with others. We believe that knowledge of these styles makes it possible to formulate propositions for teaching, thus reorganizing teaching methods and techniques in order to allow learning that is adapted to the individual needs of the student. Adapting learning would be possible through the creation of online educational resources adapted to the style of the student. In this context, this article presents the structure of a learning object interface adaptation based on the LS. The structure created should enable the creation of learning objects adapted to the student's LS and contribute to increasing the student's motivation to use a learning object as an educational resource.
Keywords: Adaptation, interface, learning object, learning style.
3268 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure and often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason for this is that in said cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi Fracture, replace the initial particle (that is intended to break) into several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for the simulation of materials that fracture completely instead of breaking locally. That being the case, when simulating local failure, it is advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the location of the highest local loads – due to the failure of the bonds in those areas – with several sub-particle clusters being the result of the fracture, which can again also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) to enable the ability to depict more realistic fracture behavior were evaluated based on the example of filter cake. The method that proved suitable for this purpose and which furthermore allows efficient and realistic simulation of breakage behavior of complex-shaped particles applicable to industrial-sized simulations is presented in this paper.
Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.
3267 Application of the Central-Difference with Half-Sweep Gauss-Seidel Method for Solving First Order Linear Fredholm Integro-Differential Equations
Authors: E. Aruchunan, J. Sulaiman
Abstract:
The objective of this paper is to analyse the application of the Half-Sweep Gauss-Seidel (HSGS) method, using the half-sweep approximation equation based on central difference (CD) and repeated trapezoidal (RT) formulas, to solve linear Fredholm integro-differential equations of first order. The formulation and implementation of the Full-Sweep Gauss-Seidel (FSGS) and Half-Sweep Gauss-Seidel (HSGS) methods are also presented. The HSGS method has been shown to be faster than the FSGS method. Some numerical tests are presented to show that the HSGS method is superior to the FSGS method.
Keywords: Integro-differential equations, Linear Fredholm equations, Finite difference, Quadrature formulas, Half-Sweep iteration.
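As a reference for the iterative core these schemes share, the sketch below applies a plain (full-sweep) Gauss-Seidel iteration to a generic linear system such as the one obtained after discretising the integro-differential equation; the test matrix is an assumed, diagonally dominant example, and the half-sweep variant (updating only every second node) is not reproduced here.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Solve A x = b by in-place Gauss-Seidel sweeps (A should be diagonally dominant)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Assumed test system: tridiagonal, diagonally dominant (typical of CD discretisations).
n = 8
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.allclose(gauss_seidel(A, b), np.linalg.solve(A, b)))
```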
3266 Graph-Based Text Similarity Measurement by Exploiting Wikipedia as Background Knowledge
Authors: Lu Zhang, Chunping Li, Jun Liu, Hui Wang
Abstract:
Text similarity measurement is a fundamental issue in many textual applications such as document clustering, classification, summarization and question answering. However, prevailing approaches based on Vector Space Model (VSM) more or less suffer from the limitation of Bag of Words (BOW), which ignores the semantic relationship among words. Enriching document representation with background knowledge from Wikipedia is proven to be an effective way to solve this problem, but most existing methods still cannot avoid similar flaws of BOW in a new vector space. In this paper, we propose a novel text similarity measurement which goes beyond VSM and can find semantic affinity between documents. Specifically, it is a unified graph model that exploits Wikipedia as background knowledge and synthesizes both document representation and similarity computation. The experimental results on two different datasets show that our approach significantly improves VSM-based methods in both text clustering and classification.
Keywords: Text classification, Text clustering, Text similarity, Wikipedia.
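For contrast with the proposed graph model, the sketch below shows the plain VSM baseline the paper improves upon: TF-IDF vectors compared by cosine similarity, which treats "car" and "automobile" as unrelated terms. The example sentences are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The car was parked outside the garage.",      # assumed toy documents
    "An automobile stood in front of the house.",
    "The election results were announced today.",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
print(cosine_similarity(tfidf).round(2))
# The BOW similarity between the first two documents stays near zero despite their
# related meaning -- the gap that Wikipedia-based background knowledge is meant to close.
```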
3265 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques namely, Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF) have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure is prescribed based on the notion of sparseness index to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.
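A minimal sketch of the NMF step discussed above: power spectra of several measurements (non-negative by construction) are stacked into a matrix and factorised, so the rows of H act as candidate oscillatory source signatures. The synthetic loop signals are assumptions, and the paper's post-processing and sparseness pre-screening are not reproduced.

```python
import numpy as np
from scipy.signal import periodogram
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
fs, t = 1.0, np.arange(2000)
src1 = np.sin(2 * np.pi * 0.05 * t)          # oscillatory source at 0.05 cycles/sample
src2 = np.sin(2 * np.pi * 0.12 * t)          # second source at 0.12 cycles/sample
# Assumed plant measurements: each loop sees a different mix of the sources plus noise.
signals = [0.9 * src1 + 0.1 * rng.normal(size=t.size),
           0.7 * src2 + 0.1 * rng.normal(size=t.size),
           0.5 * src1 + 0.5 * src2 + 0.1 * rng.normal(size=t.size)]

spectra = np.array([periodogram(s, fs=fs)[1] for s in signals])   # non-negative matrix
model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(spectra)              # how strongly each loop loads on each source
H = model.components_                         # spectral signatures of the extracted sources
freqs = periodogram(signals[0], fs=fs)[0]
print("dominant frequencies:", freqs[H.argmax(axis=1)])
```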
3264 Face Recognition with PCA and KPCA using Elman Neural Network and SVM
Authors: Hossein Esbati, Jalil Shirazi
Abstract:
In this paper, in order to classify the ORL database face images, Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) feature extraction methods are used with Elman neural network and Support Vector Machine (SVM) classification methods. The Elman network, a recurrent neural network, is proposed for modeling storage systems, and it is also used to study the effect of the number of PCA components on the system's classification precision and on the classification time for the database images. Classification is conducted with various numbers of components, and the results obtained from the Elman neural network classifier and the support vector machine are compared. In the best case, 97.41% recognition accuracy is obtained.
Keywords: Face recognition, Principal Component Analysis, Kernel Principal Component Analysis, Neural network, Support Vector Machine.
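A minimal sketch of the PCA-plus-SVM branch of the comparison, using scikit-learn's Olivetti faces (the AT&T/ORL images) as a stand-in for the ORL database; the number of components, the split and the SVM settings are illustrative assumptions, and neither KPCA nor the Elman network is reproduced here.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

faces = fetch_olivetti_faces()                     # 400 images, 40 subjects, 64x64 pixels
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pca = PCA(n_components=50, whiten=True, random_state=0).fit(X_tr)
clf = SVC(kernel="rbf", C=10.0).fit(pca.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(pca.transform(X_te))))
```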
3263 Mechanical Properties of Pea Pods (Pisium sativum Var. Shamshiri)
Authors: M. Azadbakht, N. Tajari, R. Alimoradzade
Abstract:
Knowing the mechanical resistance of pea pods against dynamic forces is important for the design of combine harvesters. In pea combine harvesters, threshing is accomplished by two mechanical actions: impact and friction forces. In this research, the effects of initial moisture content and of the impact and friction energy needed for threshing pea pods were studied. An impact device was built based on a pendulum mechanism. The experiments were done at three initial moisture content levels of 12.1, 23.5 and 39.5% (w.b.) for both the impact and friction methods. Three energy levels of 0.088, 0.126 and 0.202 J were used for the impact method, and three energy levels of 0.784, 0.930 and 1.351 J for the friction method. The threshing percentage was measured for each method. Using a frictional device, the kinetic friction coefficients at the above moisture contents were measured as 0.257, 0.303 and 0.336, respectively. The results of the analysis of variance for the two methods showed that moisture content and energy have significant effects on the threshing percentage.
Keywords: Pea pod, Energy, Friction, Impact, Initial moisture content, Threshing.
3262 Shot Boundary Detection Using Octagon Square Search Pattern
Authors: J. Kavitha, S. Sowmyayani, P. Arockia Jansi Rani
Abstract:
In this paper, a shot boundary detection method is presented using an octagon square search pattern. The color, edge, motion and texture features of each frame are extracted and used in shot boundary detection. The motion feature is extracted using the octagon square search pattern. The transition detection method is then capable of detecting shot and non-shot boundaries in the video using the feature weight values. Experimental results are evaluated on the TRECVID video test set containing various types of shot transitions with lighting effects and object and camera movement within the shots. Further, this paper compares the experimental results of the proposed method with those of existing methods and shows that the proposed method outperforms the state-of-the-art methods for shot boundary detection.
Keywords: Content-based indexing and retrieval, cut transition detection, discrete wavelet transform, shot boundary detection, video source.