Search results for: Mathematical Models.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3270

3120 An Adaptive Hand-Talking System for the Hearing Impaired

Authors: Zhou Yu, Jiang Feng

Abstract:

An adaptive Chinese hand-talking system is presented in this paper. By analyzing three data-collection strategies for new users, an adaptation framework comprising supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) are used to help hidden Markov models (HMMs) label the unlabeled data accurately; the "labeled" data, together with signer-independent models, are then input to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both yield improvements in recognition rate.
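
As a rough illustration of the exemplar-extraction step (not the authors' implementation), the following Python sketch applies scikit-learn's AffinityPropagation to hypothetical feature vectors standing in for the adaptation data:

```python
# Minimal sketch (not the authors' code): extracting an exemplar subset from
# adaptation data with affinity propagation, standing in for the AP step
# described in the abstract. Feature vectors here are random placeholders.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
adaptation_feats = rng.normal(size=(200, 12))   # hypothetical sign-feature vectors

ap = AffinityPropagation(random_state=0).fit(adaptation_feats)
exemplar_idx = ap.cluster_centers_indices_      # indices of the exemplar subset
exemplars = adaptation_feats[exemplar_idx]

print(f"{len(exemplar_idx)} exemplars selected from {len(adaptation_feats)} samples")
```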

Keywords: sign language recognition, signer adaptation, eMAP/VFS, polynomial segment model.

3119 Efficient Large Numbers Karatsuba-Ofman Multiplier Designs for Embedded Systems

Authors: M. Machhout, M. Zeghid, W. El Hadj Youssef, B. Bouallegue, A. Baganne, R. Tourki

Abstract:

Long-number multiplications (n ≥ 128 bits) are a primitive in most cryptosystems. They can be performed more efficiently using the Karatsuba-Ofman technique. This algorithm is easy to parallelize on workstation networks and on distributed memory, and it is known as the practical method of choice. Multiplying long numbers with the Karatsuba-Ofman algorithm is fast but highly recursive. In this paper, we propose different designs for implementing the Karatsuba-Ofman multiplier. A mixture of sequential and combinational system design techniques involving pipelining is applied to the proposed designs. The multiplication of large numbers can thus be adapted flexibly to time, area and power criteria. In computation- and area-constrained embedded systems such as smart cards and mobile phones, multiplication of finite field elements can be achieved more efficiently. The proposed designs are compared to other existing techniques. Mathematical models (Area(n), Delay(n)) of the proposed designs are also elaborated and evaluated on different FPGA devices.
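
For readers unfamiliar with the recursion, a minimal software sketch of the Karatsuba-Ofman splitting is given below; the paper itself targets hardware (FPGA) designs, so this is only an algorithmic illustration:

```python
# Minimal sketch of the Karatsuba-Ofman recursion for long-number
# multiplication (software illustration only; the paper concerns FPGA designs).
def karatsuba(x: int, y: int, threshold_bits: int = 64) -> int:
    n = max(x.bit_length(), y.bit_length())
    if n <= threshold_bits:          # base case: fall back to the native multiply
        return x * y
    half = n // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    p_hi = karatsuba(x_hi, y_hi, threshold_bits)    # product of high halves
    p_lo = karatsuba(x_lo, y_lo, threshold_bits)    # product of low halves
    p_mid = karatsuba(x_hi + x_lo, y_hi + y_lo, threshold_bits) - p_hi - p_lo
    return (p_hi << (2 * half)) + (p_mid << half) + p_lo

assert karatsuba(2**521 - 1, 2**127 - 1) == (2**521 - 1) * (2**127 - 1)
```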

Keywords: finite field, Karatsuba-Ofman, long numbers, multiplication, mathematical model, recursivity.

3118 EEG Correlates of Trait and Mathematical Anxiety during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatiana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Tatiana A. Golovko, Yulia V. Kovas

Abstract:

EEG correlates of mathematical and trait anxiety were studied in 52 healthy Russian speakers during the execution of error-recognition tasks with lexical, arithmetic and algebraic conditions. Event-related spectral perturbations (ERSPs) were used as a measure of brain activity. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three conditions. Correlates of anxiety were found in the theta (4-8 Hz) and beta2 (16-20 Hz) frequency bands. In the theta band, the effects of mathematical anxiety were more strongly expressed in the lexical than in the arithmetic and algebraic conditions. The mathematical anxiety effects in the theta band were associated with differences between anterior and posterior cortical areas, whereas the effects of trait anxiety were associated with inter-hemispheric differences. In the beta1 and beta2 bands, the effects of trait and mathematical anxiety were directed oppositely: trait anxiety was associated with an increase in the amplitude of desynchronization, whereas mathematical anxiety was associated with a decrease in this amplitude. The effect of mathematical anxiety in the beta2 band was insignificant in the lexical condition but strongest in the algebraic condition. The EEG correlates of anxiety in the theta band can be interpreted as indexes of task emotionality, whereas the reaction in the beta2 band is related to the tension of intellectual resources.

Keywords: EEG, brain activity, lexical and numerical error-recognition tasks, mathematical and trait anxiety.

3117 Assessment of Modern RANS Models for the C3X Vane Film Cooling Prediction

Authors: Mikhail Gritskevich, Sebastian Hohenstein

Abstract:

The paper presents the results of a detailed assessment of several modern Reynolds-Averaged Navier-Stokes (RANS) turbulence models for the prediction of C3X vane film cooling at various injection regimes. Three models are considered, namely the Shear Stress Transport (SST) model, a modification of the SST model accounting for streamline curvature (SST-CC), and the Explicit Algebraic Reynolds Stress Model (EARSM). It is shown that all the considered models have difficulty predicting the adiabatic effectiveness in the vicinity of the cooling holes; however, accounting for the Reynolds stress anisotropy within the EARSM noticeably increases the solution accuracy. Further downstream, all the models provide reasonable agreement with the experimental data for the adiabatic effectiveness, and among the considered models the most accurate results are obtained with the EARSM.

Keywords: Discrete holes film cooling, Reynolds Averaged Navier-Stokes, Reynolds stress tensor anisotropy, turbulent heat transfer.

3116 Using Reservoir Models for Monitoring Geothermal Surface Features

Authors: John P. O’Sullivan, Thomas M. P. Ratouis, Michael J. O’Sullivan

Abstract:

As the use of geothermal energy grows internationally, more effort is required to monitor and protect areas with rare and important geothermal surface features. A number of approaches are presented for developing and calibrating numerical geothermal reservoir models that are capable of accurately representing geothermal surface features. The approaches are discussed in the context of case studies of the Rotorua and Orakei Korako geothermal systems, both of which contain important surface features. The results show that the models are able to match the available field data accurately and hence can be used as valuable tools for predicting the future response of the systems to changes in use.

Keywords: Geothermal reservoir models, surface features, monitoring, TOUGH2.

3115 Volatility Model with Markov Regime Switching to Forecast Baht/USD

Authors: N. Sopipan, A. Intarasit, K. Chuarkham

Abstract:

In this paper, we forecast the volatility of the Baht/USD exchange rate using Markov Regime Switching GARCH (MRS-GARCH) models. These models allow volatility to have different dynamics according to unobserved regime variables. The main purpose of this paper is to find out whether MRS-GARCH models are an improvement on GARCH-type models in terms of modeling and forecasting Baht/USD volatility. The MRS-GARCH model performs best for Baht/USD volatility in the short term, but the GARCH model performs best in the long term.
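
As a baseline illustration of the GARCH-type recursion underlying these models (the regime-switching extension, with regime-dependent parameters and transition probabilities, is omitted here), a minimal sketch with made-up parameters:

```python
# A minimal GARCH(1,1) variance recursion and one-step-ahead forecast, as a
# baseline illustration of the GARCH-type models compared in the abstract.
# Parameter values and the return series are illustrative only.
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.08, beta=0.90):
    sigma2 = np.var(returns)                 # initialise with the sample variance
    for r in returns:
        sigma2 = omega + alpha * r**2 + beta * sigma2
    return sigma2                            # one-step-ahead variance forecast

rng = np.random.default_rng(1)
simulated_returns = rng.normal(0.0, 0.01, size=500)   # hypothetical Baht/USD log-returns
print("forecast volatility:", np.sqrt(garch11_forecast(simulated_returns)))
```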

Keywords: Volatility, Markov Regime Switching, Forecasting.

3114 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria

Authors: Abdullahi Jibrin, Aishetu Abdulkadir

Abstract:

The development of allometric models is crucial to accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models that enable the determination of total tree above-ground biomass for a savanna woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample size was distributed equally among the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceus). First, equations were developed for the five individual species; second, the five species were pooled to develop a mixed-species allometric equation. Overall, there was a strong positive relationship between total tree biomass and stem diameter. Coefficients of determination (R2) ranging from 0.93 to 0.99 (p < 0.001) were realised for the models, with considerably low standard errors of the estimates (SEE), confirming that total tree above-ground biomass has a significant relationship with dbh. F-test values for the biomass prediction models were also significant at p < 0.001, which indicates that the models are valid. This study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
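
A minimal sketch of the usual fitting procedure behind such dbh-based equations, ordinary least squares on the log-transformed power law B = a * dbh^b, with synthetic values in place of the field measurements:

```python
# Minimal sketch of fitting a power-law allometric model B = a * dbh^b by
# ordinary least squares on log-transformed data. Data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
dbh = rng.uniform(5, 60, size=36)                       # stem diameters (cm), hypothetical
biomass = 0.12 * dbh**2.4 * rng.lognormal(0, 0.1, 36)   # synthetic above-ground biomass (kg)

X = np.column_stack([np.ones_like(dbh), np.log(dbh)])   # design matrix for ln(B) = ln(a) + b*ln(dbh)
coef, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
ln_a, b = coef
print(f"fitted model: B = {np.exp(ln_a):.3f} * dbh^{b:.2f}")
```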

Keywords: Allometry, biomass, carbon stock, model, regression equation, woodland, inventory.

3113 Development of a Real-Time Energy Model for a Photovoltaic Water Pumping System

Authors: Ammar Mahjoubi, Ridha Fethi Mechlouch, Belgacem Mahdhaoui, Ammar Ben Brahim

Abstract:

The purpose of this paper is to develop and validate a model that accurately predicts the cell temperature of a PV module and adapts to various mounting configurations, mounting locations and climates while requiring only readily available data from the module manufacturer. Results from this model are also compared to results from published cell temperature models. The models were used to predict the real-time performance of a PV water pumping system in the desert of Medenine, southern Tunisia, using 60-min intervals of measured performance data over one complete year. Statistical analysis of the predicted results and measured data highlights possible sources of error, the limitations and/or adequacy of existing models for describing the temperature and efficiency of PV cells and, consequently, the accuracy of performance prediction models for PV water pumping systems.
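
The authors' adaptive model is not reproduced here; as a point of reference, the sketch below implements the standard NOCT cell-temperature estimate, typical of the published models such a study compares against:

```python
# Reference sketch (not the authors' model): the standard NOCT estimate of
# PV cell temperature, T_cell = T_amb + (NOCT - 20)/800 * G, with G in W/m^2.
def cell_temperature_noct(t_ambient_c, irradiance_w_m2, noct_c=45.0):
    """Estimate cell temperature (deg C) from ambient temperature and irradiance."""
    return t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

print(cell_temperature_noct(t_ambient_c=35.0, irradiance_w_m2=900.0))  # about 63 deg C
```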

Keywords: Temperature of a photovoltaic module, Predicted models, PV water pumping systems efficiency, Simulation, Desert of southern Tunisia.

3112 Comparison of Material Constitutive Models Used in FEA of Low Volume Roads

Authors: Lenka Ševelová, Aleš Florian

Abstract:

Probabilistic models used in reliability analyses are an appropriate and progressive tool for analyzing the behavior of low volume roads. A necessary part of the probabilistic model is a deterministic model of structural behavior. The FE model of low volume roads is created in the ANSYS software. It is able to determine the state of stress and deformation at any point of the structure and thus generate the data required for the reliability analysis. The paper compares two material constitutive models used for modeling the unbound non-homogeneous materials used in low volume roads. The first is the linear elastic model according to Hooke's theory (H model); the second is the nonlinear elastic-plastic Drucker-Prager model (D-P model).

Keywords: FEA, FEM, geotechnical materials, low volume roads, material constitutive models, pavement.

3111 Probabilistic Electrical Power Generation Modeling Using Decimal to Binary Conversion

Authors: Ahmed S. Al-Abdulwahab

Abstract:

Generation system reliability assessment is an important task that can be performed using deterministic or probabilistic techniques. The probabilistic approaches have significant advantages over the deterministic methods, but they require more complicated modeling. A power generation model is a basic requirement for this assessment, and one form of generation model is the well-known capacity outage probability table (COPT). Different analytical techniques have been used to construct the COPT. These approaches require considerable mathematical modeling of the generating units, whose models are then combined to build the COPT, adding further burden to the process of creating it. The Decimal to Binary Conversion (DBC) technique is widely and commonly applied in electronic systems and computing. This paper proposes a novel utilization of DBC to create the COPT without engaging in analytical modeling or time-consuming simulations. The simple binary representation, "0" and "1", is used to model the states of the generating units. The proposed technique is proven to be an effective approach for building the generation model.
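
A minimal sketch of the decimal-to-binary idea with hypothetical unit data: each integer k from 0 to 2^n - 1 encodes one combination of unit states in its binary digits, and the corresponding probabilities are accumulated into the COPT:

```python
# Minimal sketch: enumerate k = 0 .. 2^n - 1, read each unit's on/off state
# from the binary digits of k, and accumulate the capacity outage probability
# table (COPT). Unit capacities and forced outage rates are hypothetical.
from collections import defaultdict

units = [(50, 0.02), (50, 0.02), (100, 0.04)]   # (capacity MW, forced outage rate)
n = len(units)
copt = defaultdict(float)                       # outage MW -> probability

for k in range(2 ** n):
    prob, outage = 1.0, 0
    for i, (cap, for_rate) in enumerate(units):
        if (k >> i) & 1:                        # bit i = 1 -> unit i is on outage
            prob *= for_rate
            outage += cap
        else:
            prob *= 1.0 - for_rate
    copt[outage] += prob

for outage_mw in sorted(copt):
    print(f"{outage_mw:4d} MW out  p = {copt[outage_mw]:.6f}")
```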

Keywords: Decimal to Binary, generation, reliability.

3110 Linking Business Process Models and System Models Based on Business Process Modelling

Authors: Faisal A. Aburub

Abstract:

Organizations today need to invest in software in order to run their businesses, and to meet the organizations' objectives the software should be in line with the business processes. This research presents an approach for linking process models and system models. In particular, the new approach aims to synthesize a sequence diagram based on a role activity diagram (RAD) model. The approach includes four steps, namely: create the business process model using RAD, identify computerized activities, identify entities in the sequence diagram, and identify messages in the sequence diagram. The new approach has been validated using the student registration process at the University of Petra as a case study. Further research is required to validate the new approach in different domains.

Keywords: Business process modelling, system models, role activity diagrams, sequence diagrams.

3109 Inverse Matrix in the Theory of Dynamic Systems

Authors: R. Masarova, M. Juhas, B. Juhasova, Z. Sutova

Abstract:

In dynamic system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper combines the classical theory with the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in Matlab.
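
As a small illustration of where the inverse matrix enters, the sketch below evaluates the transfer matrix of a generic state-space model, G(s) = C (sI - A)^(-1) B + D, at a single frequency (the paper works in Matlab; this is an equivalent NumPy sketch with made-up matrices):

```python
# Minimal sketch: transfer matrix of a linear state-space model evaluated
# numerically at one frequency, G(s) = C (sI - A)^(-1) B + D. The system
# matrices are illustrative placeholders.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

s = 1j * 2.0                                    # evaluate at omega = 2 rad/s
G = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D
print("G(j2) =", G[0, 0])
```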

Keywords: Dynamic system, transfer matrix, inverse matrix, modeling.

3108 Covering-Based Rough Sets Based on the Refinement of Covering-Element

Authors: Jianguo Tang, Kun She, William Zhu

Abstract:

Covering-based rough sets are an extension of rough sets based on a covering instead of a partition of the universe, and they are therefore more powerful than rough sets in describing some practical problems. However, by extending rough sets, covering-based rough sets increase the roughness of each model in recognizing objects. How to obtain better approximations from covering-based rough set models is an important issue. In this paper, two concepts, determinate elements and indeterminate elements in a universe, are proposed and given precise definitions. This research makes a reasonable refinement of the covering element from a new viewpoint, and the refinement may generate better approximations for covering-based rough set models. To prove the theory, it is applied to eight major covering-based rough set models adapted from the literature. The result is that in all these models the lower approximation increases effectively, and correspondingly the upper approximation decreases, with the exception of two models in some special situations. Therefore, the roughness of recognizing objects is reduced. This research provides a new approach to the study and application of covering-based rough sets.
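
For orientation, a minimal sketch of one common pair of covering-based approximation operators (the eight models treated in the paper differ in their exact definitions):

```python
# Minimal sketch of one common pair of covering-based approximations:
# lower = union of covering elements contained in X,
# upper = union of covering elements that meet X.
universe = {1, 2, 3, 4, 5, 6}
covering = [{1, 2}, {2, 3, 4}, {4, 5}, {5, 6}]   # covers the universe but is not a partition
X = {2, 3, 4, 5}

lower = set().union(*([K for K in covering if K <= X] or [set()]))
upper = set().union(*([K for K in covering if K & X] or [set()]))
print("lower:", lower)   # union of {2,3,4} and {4,5} -> {2, 3, 4, 5}
print("upper:", upper)   # every element meets X here -> {1, 2, 3, 4, 5, 6}
```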

Keywords: Determinate element, indeterminate element, refinement of covering-element, refinement of covering, covering-based rough sets.

3107 Bridging the Mental Gap between Convolution Approach and Compartmental Modeling in Functional Imaging: Typical Embedding of an Open Two-Compartment Model into the Systems Theory Approach of Indicator Dilution Theory

Authors: Gesine Hellwig

Abstract:

Functional imaging procedures for the non-invasive assessment of tissue microcirculation are in high demand, but require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two theoretical, apparently different concepts have been established for tracer kinetic modeling of contrast agent transport in tissues: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized in accordance with the theory of linear time-invariant (LTI) systems by using a convolution approach. Based on mathematical considerations, it can be shown that also in the case of an open two-compartment model well known from functional imaging, the concentration-time course in tissue is given by a convolution, which allows a separation of the arterial input function from a system function (the impulse response function) summarizing the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), and thus results known from IDT remain valid for the compartment approach. Given the long history of applications of compartmental analysis, similar solutions of the so-called forward problem, even in a more general context, can already be found in the extensive literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging (though not from the mathematical point of view) there seems to be a gap between the two approaches, which the author would like to bridge by an exemplary analysis of the well-known model.
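
A minimal numerical sketch of the statement above, convolving a toy arterial input function with a Kety-type open two-compartment impulse response (parameter values are illustrative only):

```python
# Minimal sketch: for an open two-compartment (Kety-type) model the tissue
# curve is the arterial input function convolved with an exponential impulse
# response, C_t(t) = AIF(t) * Ktrans * exp(-kep * t). Values are illustrative.
import numpy as np

dt = 0.5                                     # time step (s)
t = np.arange(0, 300, dt)
aif = 5.0 * (t / 20.0) * np.exp(-t / 20.0)   # toy gamma-variate arterial input
ktrans, kep = 0.005, 0.02                    # 1/s, hypothetical exchange parameters
impulse_response = ktrans * np.exp(-kep * t)

tissue = np.convolve(aif, impulse_response)[: len(t)] * dt   # discrete convolution
print("peak tissue concentration:", tissue.max())
```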

Keywords: Functional imaging, tracer kinetic modeling, LTI system, indicator dilution theory / convolution approach, two-compartment model.

3106 A Comparison of Different Soft Computing Models for Credit Scoring

Authors: Nnamdi I. Nwulu, Shola G. Oroja

Abstract:

It has become crucial over the years for nations to improve their credit scoring methods and techniques in light of the increasing volatility of the global economy. Statistical methods and tools have been the favoured means for this; however, artificial intelligence or soft computing based techniques are becoming increasingly preferred due to their proficiency and precision and their relative simplicity. This work presents a comparison between Support Vector Machines and Artificial Neural Networks, two popular soft computing models, when applied to credit scoring. Among the different criteria that can be used for comparison, accuracy, computational complexity and processing time are selected to evaluate both models. Furthermore, the German credit scoring dataset, a real-world dataset, is used to train and test both developed models. Experimental results obtained from our study suggest that although both soft computing models can be used with a high degree of accuracy, Artificial Neural Networks deliver better results than Support Vector Machines.
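
A minimal sketch of such a comparison using scikit-learn's SVC and MLPClassifier on a synthetic stand-in for the German credit data (the paper's exact features, tuning and evaluation criteria are not reproduced):

```python
# Minimal sketch comparing an SVM and a small neural network on synthetic
# binary-classification data standing in for the German credit dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                                          random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```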

Keywords: Artificial Neural Networks, Credit Scoring, Soft Computing Models, Support Vector Machines.

3105 Dynamic Analysis of Nonlinear Models with Infinite Extension by Boundary Elements

Authors: Delfim Soares Jr., Webe J. Mansur

Abstract:

The Time-Domain Boundary Element Method (TD-BEM) is a well-known numerical technique that properly handles dynamic analyses of media of infinite extension. However, when such analyses also involve nonlinear behavior, very complex numerical procedures arise for the TD-BEM, which may render its application prohibitive. In order to avoid this drawback and model nonlinear infinite media, the present work couples two BEM formulations, aiming to achieve the best of both worlds. In this context, the regions expected to behave nonlinearly are discretized by the Domain Boundary Element Method (D-BEM), which has a simpler mathematical formulation but is unable to deal with infinite-domain analyses; the TD-BEM is employed as an effective non-reflecting boundary. An iterative procedure is considered for the coupling of the TD-BEM and D-BEM, based on a relaxed update of the variables at the common interfaces. Elastoplastic models are the focus, and different time steps are allowed for each BEM formulation in the coupled analysis.

Keywords: Boundary Element Method, Dynamic Elastoplastic Analysis, Iterative Coupling, Multiple Time-Steps.

3104 In silico Simulations for DNA Shuffling Experiments

Authors: Luciana Montera

Abstract:

DNA shuffling is a powerful method used to evolve molecules with specific functions in vitro and has applications in areas such as pharmaceutical, medical and agricultural research. The success of such experiments depends on a variety of parameters and conditions that sometimes cannot be properly pre-established. Here, two computational models predicting DNA shuffling results are presented, and their use and results are evaluated against an empirical experiment. The in silico and in vitro results agree, indicating the importance of these two models and motivating the study and development of new models.

Keywords: Computer simulation, DNA shuffling, in silico and in vitro comparison.

3103 Comparison of Machine Learning Techniques for Single Imputation on Audiograms

Authors: Sarah Beaver, Renee Bryce

Abstract:

Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN) and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had or were candidates for cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared with training on single-ear audiograms. The Root Mean Square Error (RMSE) values for the best Random Forest models range from 4.74 to 6.37, and the corresponding R2 values range from 0.91 to 0.96. The RMSE values for the best KNN models range from 5.00 to 7.72, with R2 values from 0.89 to 0.95. The best imputation models achieved R2 between 0.89 and 0.96 and RMSE values of less than 8 dB. We also show that the accuracy of classification predictive models improved by two percent with our imputation models compared with constant imputation.
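
A minimal sketch of single imputation of missing audiogram thresholds with scikit-learn's KNNImputer, on synthetic values rather than the cochlear-implant data:

```python
# Minimal sketch: KNN single imputation of missing audiogram thresholds.
# The matrix below is synthetic, not the study's patient data.
import numpy as np
from sklearn.impute import KNNImputer

# rows = patients, columns = thresholds (dB HL) at 125 ... 8000 Hz
audiograms = np.array([
    [30, 35, 40, 55, 60, 70, np.nan],
    [25, 30, np.nan, 50, 65, 75, 80],
    [20, 25, 35, 45, 55, 65, 70],
    [40, 45, 50, np.nan, 70, 80, 85],
])
imputed = KNNImputer(n_neighbors=2).fit_transform(audiograms)
print(np.round(imputed, 1))
```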

Keywords: Machine Learning, audiograms, data imputations, single imputations.

3102 Simulating and Forecasting Qualitative Macroeconomic Models Using Rule-Based Fuzzy Cognitive Maps

Authors: Spiros Mazarakis, George Matzavinos, Peter P. Groumpos

Abstract:

Economic models are complex dynamic systems with many uncertainties and fuzzy data. Conventional modeling approaches using well-known methods and techniques cannot provide realistic and satisfactory answers to today's challenging economic problems. Qualitative modeling using fuzzy logic and intelligent system theories can be used to model macroeconomic systems. Fuzzy Cognitive Maps (FCMs) are a new method used to model the dynamic behavior of complex systems. For the first time, FCMs and the Mamdani model of intelligent control are used to model macroeconomic models. This new model is referred to as the Mamdani Rule-Based Fuzzy Cognitive Map (MBFCM) and provides the academic and research community with a new, promising, integrated advanced computational model. A new economic model is developed for a qualitative approach to macroeconomic modeling, and fuzzy controllers for such models are designed. Simulation results for an economic scenario are provided and extensively discussed.
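
For orientation, a minimal sketch of a plain FCM state update, A(t+1) = f(W^T A(t) + A(t)); the Mamdani rule-based extension described above replaces the fixed weights with fuzzy rules. Weights and concepts here are made up:

```python
# Minimal sketch of a plain fuzzy cognitive map update with a sigmoid
# squashing function. The weight matrix and initial activations are made up.
import numpy as np

W = np.array([[0.0, 0.6, -0.3],     # W[i, j] = influence of concept i on concept j
              [0.4, 0.0,  0.5],
              [-0.2, 0.7, 0.0]])
A = np.array([0.5, 0.2, 0.8])       # initial activation of three macroeconomic concepts

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
for _ in range(10):                 # iterate towards (approximate) convergence
    A = sigmoid(W.T @ A + A)        # one common variant keeps the self-term A
print("steady-state activations:", np.round(A, 3))
```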

Keywords: Macroeconomic Models, Mamdani Rule-Based FCMs (MBFCMs), Qualitative and Dynamic Systems, Simulation.

3101 Some Solid Transportation Models with Crisp and Rough Costs

Authors: Pradip Kundu, Samarjit Kar, Manoranjan Maiti

Abstract:

In this paper, some practical solid transportation models are formulated considering the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This applies to systems in which full vehicles, e.g. trucks or rail coaches, are to be booked for the transportation of products, so that the transportation cost is determined on the basis of full conveyances. The models with unit transportation costs as rough variables are transformed into deterministic forms using rough chance-constrained programming with the help of the trust measure. Numerical examples are provided to illustrate the proposed models in the crisp environment as well as with unit transportation costs as rough variables.

Keywords: Solid transportation problem, Rough set, Rough variable, Trust measure.

3100 Generalized Mathematical Description and Simulation of Grid-Tied Thyristor Converters

Authors: V. S. Klimash, Ye Min Thu

Abstract:

Grid-tied thyristor rectifiers, inverters, and AC voltage regulators are widely used in industry and in electrified transport, and they have much in common both in the power circuit and in the control system. They share a common mathematical structure and switching processes. At the same time, rectifier units, inverter units and thyristor AC voltage regulators are usually considered separately, both theoretically and practically, and are written about in different books as completely different devices. The aim of this work is to combine them into one class based on the unity of the equations describing their electromagnetic processes, and then to demonstrate this unity on a mathematical model and an experimental setup. Based on research spanning from the mathematics to the product, a conclusion is drawn about a methodology for rapid research and experimental design work, preparation for production, and serial production of converters with a unified structure. In recent years, there has been a transition from thyristor circuits to transistor circuits in modular design. Using the example of thyristor rectifiers and AC voltage regulators, we conclude that grid-tied thyristor converters share a unified mathematical structure.

Keywords: Direct current, alternating current, rectifier, AC voltage regulator, generalized mathematical model.

3099 Mathematical Modeling of Storm Surge in Three Dimensional Primitive Equations

Authors: Worachat Wannawong, Usa W. Humphries, Prungchan Wongwises, Suphat Vongvisessomjai

Abstract:

Mathematical modeling of storm surge in sea and coastal regions such as the South China Sea (SCS) and the Gulf of Thailand (GoT) is important for studying typhoon characteristics. A storm surge causes inundation at the lateral boundary in coastal zones, particularly in the GoT and some parts of the SCS. Model simulations of the three dimensional primitive equations with a high-resolution model are important to protect local property and human life from typhoon surges. In the present study, mathematical modeling is used to simulate the typhoon-induced surges in three case studies of Typhoon Linda 1997. The results of the model simulations at the tide gauge stations describe the characteristics of the storm surges in the coastal zones.

Keywords: lateral boundary, mathematical modeling, numerical simulations, three dimensional primitive equations, storm surge.

3098 Low-Cost and Highly Accurate Motion Models for Three-Dimensional Local Landmark-based Autonomous Navigation

Authors: Gheorghe Galben, Daniel N. Aloi

Abstract:

Recently, Spherical Motion Models (SMMs) have been introduced [1]. These new models have been developed for 3D local landmark-based Autonomous Navigation (AN). This paper presents new arguments and experimental results to support the characteristics of the SMMs. The accuracy and the robustness in performing a specific task are the main concerns of the new investigations. To analyze the performance of the SMMs, the most powerful tools of estimation theory, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), which give the best estimates in noisy environments, have been employed. Monte Carlo validation implementations have also been used to test the stability and robustness of the models.
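
A minimal EKF sketch for a 2D position estimate with a range measurement to one known landmark, just to illustrate the predict/update cycle employed; it is not the spherical motion model of the paper, and all values are made up:

```python
# Minimal EKF sketch (not the paper's model): 2D position state, additive
# motion input, and a nonlinear range measurement to one known landmark.
import numpy as np

x = np.array([0.0, 0.0])            # state: position (m)
P = np.eye(2) * 1.0                 # state covariance
Q = np.eye(2) * 0.01                # process noise covariance
R = np.array([[0.05]])              # range-measurement noise covariance
landmark = np.array([10.0, 5.0])    # known landmark position

def ekf_step(x, P, u, z):
    # predict: simple motion model x <- x + u, where u is the commanded displacement
    x_pred = x + u
    P_pred = P + Q
    # update with range measurement z = ||landmark - x|| + noise
    diff = landmark - x_pred
    range_pred = np.linalg.norm(diff)
    H = (-diff / range_pred).reshape(1, 2)          # Jacobian of h(x) = ||landmark - x||
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ np.array([z - range_pred])).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = ekf_step(x, P, u=np.array([1.0, 0.5]), z=9.6)
print("updated position estimate:", np.round(x, 2))
```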

Keywords: Autonomous navigation, extended Kalman filter, unscented Kalman filter, localization algorithms.

3097 Statistical (Radio) Path Loss Modelling for RF Propagation within Localized Indoor and Outdoor Environments of the Academic Building of INTI University College (Laureate International Universities)

Authors: Emmanuel O.O. Ojakominor, Tian F. Lai

Abstract:

The handful of propagation textbooks that discuss radio frequency (RF) propagation models tend merely to list the models and perhaps discuss them rather briefly, which may well be frustrating for the potential first-time modeller who has no idea how these models could have been derived. This paper fundamentally provides an overture to modelling the radio channel. For the modelling practice discussed here, signal-strength field measurements had to be conducted beforehand (at 469 MHz); to be precise, this paper primarily concerns empirically/statistically modelling the radio channel and presents results obtained from empirically modelling the environments in question. On the whole, the paper proposes three propagation models, corresponding to three experimented environments, derived by making the most of statistical measures. The first two models were derived via simple linear regression analysis, whereas the third was derived using multiple regression analysis (with five predictors). Additionally, as implied by the title, both indoor and outdoor environments were studied; however, two of the environments are neither entirely indoor nor entirely outdoor, while the third is completely indoor.
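
A minimal sketch of the simple-linear-regression step, fitting the log-distance path-loss model PL(d) = PL(d0) + 10 n log10(d/d0) to synthetic measurements (not the 469 MHz survey data):

```python
# Minimal sketch: least-squares fit of the log-distance path-loss model to
# measured signal data. The distances and losses below are synthetic.
import numpy as np

d0 = 1.0                                             # reference distance (m)
d = np.array([1, 2, 5, 10, 20, 40, 80], dtype=float) # measurement distances (m)
pl = np.array([40, 46, 54, 61, 67, 73, 80], float)   # measured path loss (dB)

X = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / d0)])
(pl0, n_exp), *_ = np.linalg.lstsq(X, pl, rcond=None)
print(f"PL(d0) = {pl0:.1f} dB, path-loss exponent n = {n_exp:.2f}")
```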

Keywords: RF propagation, radio channel modelling, statistical methods.

3096 Mixtures of Monotone Networks for Prediction

Authors: Marina Velikova, Hennie Daniels, Ad Feelders

Abstract:

In many data mining applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all of them. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models: the basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. We demonstrate the application of our method using simulation and real case studies. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison with standard neural networks with weight decay.

Keywords: mixture models, monotone neural networks, partially monotone models, partially monotone problems.

3095 Analyzing Data on Breastfeeding Using Dispersed Statistical Models

Authors: Naushad Mamode Khan, Cheika Jahangeer, Maleika Heenaye-Mamode Khan

Abstract:

Exclusive breastfeeding is the feeding of a baby on no milk other than breast milk. Exclusive breastfeeding during the first six months of life is very important, as it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and health problems. Moreover, it helps to reduce the incidence and/or severity of diarrhoea, lower respiratory infection and urinary tract infection. In this paper, we survey the factors that influence exclusive breastfeeding and use two dispersed statistical models to analyze the data: the Generalized Poisson regression model and the Com-Poisson regression model.

Keywords: Exclusive breastfeeding, regression model, Generalized Poisson, Com-Poisson.

3094 Zero Truncated Strict Arcsine Model

Authors: Y. N. Phang, E. F. Loh

Abstract:

The zero-truncated model is usually used to model count data without zeros; it is the opposite of the zero-inflated model. Zero-truncated Poisson and zero-truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and hospital length of stay. Zero-truncated models are also used as the basis for developing hurdle models. In this study, we developed a new model, the zero-truncated strict arcsine model, which can be used as an alternative for modeling count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
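
A minimal sketch of the maximum likelihood step for a zero-truncated count model, shown here for the zero-truncated Poisson (the strict arcsine variant of the paper swaps in a different probability mass function); the data are made up:

```python
# Minimal sketch: MLE for a zero-truncated Poisson model on made-up counts.
# log pmf: k*log(lam) - lam - log(k!) - log(1 - exp(-lam))
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

data = np.array([1, 1, 2, 1, 3, 2, 4, 1, 2, 5, 1, 3])   # counts without zeros

def neg_loglik(lam):
    ll = (data * np.log(lam) - lam - gammaln(data + 1)
          - np.log1p(-np.exp(-lam)))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
print("MLE of lambda:", round(res.x, 3))
```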

Keywords: Hurdle models, maximum likelihood estimation method, positive count data.

3093 Establishment of Kinetic Zone Diagrams via Simulated Linear Sweep Voltammograms for Soluble-Insoluble Systems

Authors: Imene Atek, Abed M. Affoune, Hubert Girault, Pekka Peljo

Abstract:

Due to the need for a rigorous mathematical model that can help estimate kinetic properties of soluble-insoluble systems from voltammetric experiments, a Nicholson semi-analytical approach was used in this work to model and predict theoretical linear sweep voltammetry responses for reversible, quasi-reversible or irreversible electron transfer reactions. The redox system of interest is a one-step metal electrodeposition process. A rigorous analysis of simulated linear scan voltammetric responses following variation of the dimensionless factors, the rate constant and the charge transfer coefficient over a broad range was carried out and presented in the form of so-called kinetic zone diagrams. These diagrams were divided into three kinetic zones. Interpreting these zones leads to empirical mathematical models that allow the experimenter to determine the kinetics of electrodeposition reactions whatever the degree of reversibility. The validity of the obtained results was tested, and excellent experiment-theory agreement was shown.

Keywords: Electrodeposition, kinetics diagrams, modeling, voltammetry.

3092 Image Segmentation and Contour Recognition Based on Mathematical Morphology

Authors: Pinaki Pratim Acharjya, Esha Dutta

Abstract:

Contour detection is one of the important pre-processing steps in present-day image segmentation. Contours characterize boundaries, and contour detection is one of the most difficult tasks in image processing; hence it is a problem of fundamental importance. Contour detection decreases the volume of data considerably and removes useless information, while the structural properties of the image remain the same. In this research, a robust and effective contour detection technique based on mathematical morphology is proposed. Three different contour detection results are obtained using morphological dilation and erosion, and a comparative analysis of the three results is also carried out.
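
A minimal sketch of one dilation/erosion-based variant, the morphological gradient of a binary image, which marks the object contour:

```python
# Minimal sketch: contour extraction by mathematical morphology, taking the
# morphological gradient (dilation XOR erosion) of a simple binary image.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

img = np.zeros((9, 9), dtype=bool)
img[2:7, 3:8] = True                                  # a simple rectangular object

structure = np.ones((3, 3), dtype=bool)               # 3x3 structuring element
contour = binary_dilation(img, structure) ^ binary_erosion(img, structure)
print(contour.astype(int))
```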

Keywords: Image segmentation, contour detection, mathematical morphology.

3091 AnQL: A Query Language for Annotation Documents

Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner

Abstract:

This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database, and they are flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the already existing wide knowledge and skill set of SQL.

Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.
