Search results for: polynomial equation
Paper Count: 1295


695 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria

Authors: Abdullahi Jibrin, Aishetu Abdulkadir

Abstract:

The development of allometric models is crucial to accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models that would enable the determination of total tree aboveground biomass for the savanna woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample size was distributed equally among the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceus). First, equations were developed for the five individual species; second, data from these five species were pooled to develop a mixed-species allometric equation. Overall, there was a strong positive relationship between total tree biomass and stem diameter. Coefficients of determination (R² values) ranging from 0.93 to 0.99 (p < 0.001) were realised for the models, with considerably low standard errors of the estimate (SEE), which confirms that total tree aboveground biomass has a significant relationship with dbh. F-test values for the biomass prediction models were also significant at p < 0.001, which indicates that the models are valid. This study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
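
Allometric models of this kind are typically power laws fitted on log-transformed data. The following is a minimal sketch of that generic procedure, not the authors' fitted models; the dbh/biomass values are hypothetical placeholders.

```python
import numpy as np

# Hypothetical destructive-sampling data (dbh in cm, aboveground biomass in kg).
dbh = np.array([8.2, 11.5, 15.0, 19.3, 24.1, 30.6, 37.8])
biomass = np.array([14.0, 32.0, 65.0, 120.0, 210.0, 390.0, 640.0])

# Power-law allometry B = a * dbh**b becomes linear after log-transform:
# ln(B) = ln(a) + b*ln(dbh), so ordinary least squares applies.
b, ln_a = np.polyfit(np.log(dbh), np.log(biomass), deg=1)
a = np.exp(ln_a)

pred = a * dbh**b
ss_res = np.sum((biomass - pred) ** 2)
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"B = {a:.3f} * dbh^{b:.3f},  R^2 = {r2:.3f}")
```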

Keywords: Allometry, biomass, carbon stock, model, regression equation, woodland, inventory.

694 Peakwise Smoothing of Data Models using Wavelets

Authors: D Sudheer Reddy, N Gopal Reddy, P V Radhadevi, J Saibaba, Geeta Varadan

Abstract:

Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; its generalization led to the development of the Savitzky-Golay filter. Many window smoothing methods were developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion, or wavelet expansion also yields smoothed data. Wavelets also smooth the data to a great extent by thresholding the wavelet coefficients. Almost all smoothing methods destroy the peaks and flatten them when the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology called peakwise smoothing that smooths the data to any desired level without losing the major peak features.
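
To illustrate the trade-off the abstract describes (wider windows suppress noise but flatten peaks), here is a small comparison of a plain moving average against a Savitzky-Golay filter on synthetic data; this is background illustration only, not the paper's peakwise method.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy signal with a sharp peak (placeholder data).
t = np.linspace(0, 1, 500)
signal = np.exp(-((t - 0.5) / 0.02) ** 2) + 0.1 * np.sin(20 * np.pi * t)
noisy = signal + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Moving average: a wide window suppresses noise but flattens the peak.
window = 25
moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

# Savitzky-Golay: a local polynomial fit preserves the peak shape better.
savgol = savgol_filter(noisy, window_length=25, polyorder=3)

print("peak heights:", signal.max(), moving_avg.max(), savgol.max())
```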

Keywords: smoothing, moving average, peakwise smoothing, spatial density models, planar shape models, wavelets.

693 Suspended Matter Model on Alsat-1 Image by MLP Network and Mathematical Morphology: Prototypes by K-Means

Authors: S. Loumi, H. Merrad, F. Alilat, B. Sansal

Abstract:

In this article, we propose a methodology for the characterization of the suspended matter along Algiers Bay. An approach based on a multilayer perceptron (MLP) trained by back-propagation of the gradient, optimized by the Levenberg-Marquardt (LM) algorithm, is used. Emphasis was placed on the choice of the components of the training base, for which a comparative study was made of four methods: random selection and three variants of classification by K-means. The samples are taken from a suspended matter image obtained by an analytical model based on polynomial regression that takes in situ measurements into account. The mask that selects the zone of interest (water in our case) was produced using a multispectral classification by the ISODATA algorithm. To improve the classification result, this mask was cleaned using the tools of mathematical morphology. The results of this study, presented in the form of curves, tables, and images, show the soundness of our methodology.

Keywords: Classification K-means, mathematical morphology, neural network MLP, remote sensing, suspended particulate matter

692 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper is aimed at expanding our understanding of the effects of hypoxia training on our bodies to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots that allows them to recognize their early hypoxia signs and symptoms; it was also undertaken by Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering, and Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude of up to 22,000 ft (FL220), or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with a chest strap sensor pre and post HH exposure. A pulse oximeter registered levels of oxygen saturation (SpO2) and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms as described by the SACs during the HH training session were also registered. These data allowed us to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms showed temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery of HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a group of heterogeneous, non-pilot individuals showed similar results to previous studies in homogeneous populations of pilots.
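
As a hedged illustration of the curve-fitting step described above, the sketch below fits a sixth-order polynomial to a synthetic SpO2 time series with numpy.polyfit; the data, noise model, and time scale are placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SpO2 samples (%) during a hypobaric exposure, one every 30 s.
t_exposure = np.arange(0, 20, 0.5)                     # minutes at altitude
spo2_exposure = (97 - 10 * (1 - np.exp(-t_exposure / 6))
                 + rng.normal(0, 0.4, t_exposure.size))

# Sixth-order polynomial fit during exposure, mirroring the model order the
# abstract reports; a fourth/fifth-order fit would be used for recovery.
coeffs = np.polyfit(t_exposure, spo2_exposure, deg=6)
model = np.poly1d(coeffs)

predicted = model(t_exposure)
rmse = np.sqrt(np.mean((predicted - spo2_exposure) ** 2))
print(f"fit RMSE = {rmse:.2f} %SpO2")
```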

Keywords: Altitude sickness, cabin pressure, hypobaric chamber training, symptoms and altitude, slow onset hypoxia.

691 Generalized π-Armendariz Authentication Cryptosystem

Authors: Areej M. Abduldaim, Nadia M. G. Al-Saidi

Abstract:

Algebra is one of the important fields of mathematics. It concerns the study and manipulation of mathematical symbols, as well as the study of abstractions such as groups, rings, and fields. Due to the development of these abstractions, it has been extended to consider other structures, such as vectors, matrices, and polynomials, which are non-numerical objects. Computer algebra is the implementation of algebraic methods as algorithms and computer programs. Recently, many algebraic cryptosystem protocols based on non-commutative algebraic structures have been adopted for authentication, key exchange, and encryption-decryption processes. Cryptography is the science aimed at sending information through public channels in such a way that only an authorized recipient can read it. Ring theory is the most attractive category of algebra in the area of cryptography. In this paper, we employ the algebraic structure called skew π-Armendariz rings to design a neoteric algorithm for zero-knowledge proof. The proposed protocol is established and illustrated through a numerical example, and its soundness and completeness are proved.

Keywords: Cryptosystem, identification, skew π-Armendariz rings, skew polynomial rings, zero knowledge protocol.

690 Increasing The Speed of Convergence of an Artificial Neural Network based ARMA Coefficients Determination Technique

Authors: Abiodun M. Aibinu, Momoh J. E. Salami, Amir A. Shafie, Athaur Rahman Najeeb

Abstract:

In this paper, novel techniques for increasing the accuracy and speed of convergence of a feedforward back-propagation artificial neural network (FFBPNN) with the polynomial activation function reported in the literature are presented. These techniques were subsequently used to determine the coefficients of Autoregressive Moving Average (ARMA) and Autoregressive (AR) systems. The results obtained by introducing sequential and batch methods of weight initialization, a batch method of weight and coefficient update, and adaptive momentum and learning rate techniques give more accurate results and a significant reduction in convergence time when compared to the traditional back-propagation algorithm, thereby making the FFBPNN an appropriate technique for online ARMA coefficient determination.
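
The adaptive momentum and learning-rate updates the abstract refers to can be sketched generically as below; this uses a plain linear predictor on synthetic AR(2) data, not the authors' polynomial-activation FFBPNN, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(2) series: x[n] = a1*x[n-1] + a2*x[n-2] + noise (placeholder truth).
a_true = np.array([0.6, -0.3])
x = np.zeros(500)
for n in range(2, x.size):
    x[n] = a_true @ x[n-2:n][::-1] + rng.normal()

# Batch training with a momentum term and a simple adaptive learning rate,
# illustrating the update rule only.
X = np.column_stack([x[1:-1], x[:-2]])   # inputs [x[n-1], x[n-2]]
y = x[2:]
w, v = np.zeros(2), np.zeros(2)
lr, mom, prev_err = 0.1, 0.9, np.inf
for epoch in range(300):
    e = X @ w - y
    grad = X.T @ e / y.size
    v = mom * v - lr * grad                          # momentum update
    w = w + v
    err = np.mean(e ** 2)
    lr = min(lr * 1.05, 1.0) if err < prev_err else lr * 0.7   # adaptive rate
    prev_err = err

print("estimated AR coefficients:", w, "true:", a_true)
```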

Keywords: Adaptive Learning rate, Adaptive momentum, Autoregressive, Modeling, Neural Network.

689 Color Image Edge Detection using Pseudo-Complement and Matrix Operations

Authors: T. N. Janakiraman, P. V. S. S. R. Chandra Mouli

Abstract:

A color image edge detection algorithm is proposed in this paper using pseudo-complement and matrix rotation operations. First, the pseudo-complement method is applied to the image for each channel. Then, matrix operations are applied to the output image of the first stage. Dominant pixels are obtained by image differencing between the pseudo-complement image and the matrix-operated image. Median filtering is carried out to smoothen the image, thereby removing the isolated pixels. Finally, the dominant or core pixels occurring in at least two channels are selected. On plotting the selected edge pixels, the final edge map of the given color image is obtained. The algorithm is also tested in the HSV and YCbCr color spaces. Experimental results on both synthetic and real-world images show that the accuracy of the proposed method is comparable to that of other color edge detectors. All the proposed procedures can be applied to any image domain and run in polynomial time.

Keywords: Color edge detection, dominant pixels, matrix rotation/shift operations, pseudo-complement.

688 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error

Authors: R. Sabre, W. Horrigue, J. C. Simon

Abstract:

This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe a process continuously over a time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, reducing the additive error. In order to solve the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the domain where the spectral density is not zero. We show that the proposed estimator is asymptotically unbiased and consistent. Thus we obtain an estimator that resolves the two difficulties concerning the choice of the observation instants of a continuous-time process and observations affected by a constant error.

Keywords: Spectral density, stable processes, aliasing, periodogram.

687 Biometric Steganography Using Variable Length Embedding

Authors: Souvik Bhattacharyya, Indradip Banerjee, Anumoy Chakraborty, Gautam Sanyal

Abstract:

Recent growth in digital multimedia technologies has provided many facilities for information transmission, reproduction and manipulation. Therefore, information security is one of the foremost concerns in the present-day situation. Biometric information security is one of the information security mechanisms, and it has advantages as well as disadvantages. A biometric system is at risk from a range of attacks. These attacks are intended to bypass the security system or to suspend its normal functioning. Various hazards have been discovered while using biometric systems. Proper use of steganography greatly reduces the risks to biometric systems from hackers. Steganography is one of the popular information-hiding techniques. The goal of steganography is to hide information inside a cover medium like text, image, audio, video, etc., in such a way that it is not possible to detect the existence of the secret information. Here, in this paper, a new security concept has been established by making the system more secure with the help of steganography along with biometric security. The biometric information has been embedded into a skin-tone portion of an image with the help of the proposed steganographic technique.

Keywords: Biometrics, Skin tone detection, Series, Polynomial, Cover Image, Stego Image.

686 Flow of a Second Order Fluid through Constricted Tube with Slip Velocity at Wall Using Integral Method

Authors: Nosheen Zareen Khan, Abdul Majeed Siddiqui, Muhammad Afzal Rana

Abstract:

The steady flow of a second-order fluid through a constricted tube with slip velocity at the wall is modeled and analyzed theoretically. The governing equations are simplified by assuming no slip in the radial direction. Based on the Karman-Pohlhausen procedure, a polynomial solution for the axial velocity profile is presented. Expressions for the pressure gradient, shear stress, separation and reattachment points, and radial velocity are also calculated. The effects of slip and no-slip velocity on the velocity magnitude, shear stress, and pressure gradient are discussed and depicted graphically. It is noted that when the Reynolds number increases, the velocity magnitude of the fluid decreases under both slip and no-slip conditions. It is also found that the wall shear stress, separation, and reattachment points are strongly affected by the Reynolds number.

Keywords: Approximate solution, constricted tube, non-Newtonian fluids, Reynolds number.

685 Simulation of Heat Transfer in the Multi-Layer Door of the Furnace

Authors: U. Prasopchingchana

Abstract:

The temperature distribution and the heat transfer rates through a multi-layer door of a furnace were investigated. The inside of the door was in contact with hot air, and the other side of the door was in contact with room air. Radiation heat transfer from the walls of the furnace to the door and from the door to the surrounding area was included in the problem. This work is a two-dimensional steady-state problem. The Churchill and Chu correlation was used to find the local convection heat transfer coefficients at the surfaces of the furnace door. The thermophysical properties of air were treated as functions of temperature, and polynomial curve fitting for the fluid properties was carried out. The finite difference method was used to discretize the conduction heat transfer within the furnace door, and the Gauss-Seidel iteration was employed to compute the temperature distribution in the door. The temperature distribution in the horizontal mid-plane of the furnace door in the two-dimensional problem agrees with that of the one-dimensional problem. The local convection heat transfer coefficients at the inside and outside surfaces of the furnace door are presented.
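
As a rough illustration of the interior finite-difference/Gauss-Seidel step described above, the sketch below solves steady 2-D conduction in a single uniform layer with fixed surface temperatures; the multi-layer composition, temperature-dependent properties, and convection/radiation boundary conditions of the paper are not reproduced, and all numbers are placeholders.

```python
import numpy as np

# 2-D steady conduction (Laplace equation) on a uniform grid, Gauss-Seidel sweeps.
nx, ny = 30, 30
T = np.full((ny, nx), 300.0)           # initial guess, K
T[:, 0] = 900.0                        # furnace-side surface (placeholder)
T[:, -1] = 300.0                       # room-side surface (placeholder)
T[0, :] = T[-1, :] = 600.0             # top/bottom edges (placeholder)

for sweep in range(2000):
    max_change = 0.0
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new = 0.25 * (T[j, i - 1] + T[j, i + 1] + T[j - 1, i] + T[j + 1, i])
            max_change = max(max_change, abs(new - T[j, i]))
            T[j, i] = new              # in-place update = Gauss-Seidel
    if max_change < 1e-4:              # convergence criterion
        break

print(f"converged after {sweep + 1} sweeps; mid-plane profile:", T[ny // 2, ::8])
```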

Keywords: Conduction, heat transfer, multi-layer door, natural convection

684 Quantum Computing: A New Era of Computing

Authors: Jyoti Chaturvedi Gursaran

Abstract:

Nature conducts its actions in a very private manner. To reveal these actions, classical science has made a great effort. But classical science can experiment only with things that can be seen with the eyes. Beyond the scope of classical science, quantum science works very well. It is based on postulates such as the qubit, superposition of two states, entanglement, measurement and evolution of states, which are briefly described in the present paper. One of the applications of quantum computing, i.e. the implementation of a novel quantum evolutionary algorithm (QEA) to automate the timetabling problem of Dayalbagh Educational Institute (Deemed University), is also presented in this paper. Making a good timetable is a scheduling problem. It is an NP-hard, multi-constrained, complex combinatorial optimization problem whose solution cannot be obtained in polynomial time. The QEA uses genetic operators on the Q-bits as well as a quantum-gate update operator, which is introduced as a variation operator to converge toward better solutions.
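
The Q-bit observation and rotation-gate update the abstract mentions can be sketched as follows; the toy OneMax objective stands in for a real timetable score, and the rotation step size and population settings are assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bits, pop, generations = 20, 10, 100
theta = np.full((pop, n_bits), np.pi / 4)      # |alpha|^2 = |beta|^2 = 0.5

def fitness(bits):
    # Toy OneMax objective standing in for a timetable score.
    return bits.sum(axis=-1)

best_bits, best_fit = None, -1
for g in range(generations):
    # "Observe" each Q-bit: collapse to 1 with probability sin^2(theta).
    bits = (rng.random((pop, n_bits)) < np.sin(theta) ** 2).astype(int)
    fits = fitness(bits)
    if fits.max() > best_fit:
        best_fit, best_bits = fits.max(), bits[fits.argmax()].copy()
    # Rotation-gate update: nudge each qubit toward the best solution's bit value.
    delta = 0.05 * np.pi
    theta += np.where(best_bits == 1, delta, -delta)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)   # keep probabilities in (0, 1)

print("best fitness:", best_fit, "of", n_bits)
```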

Keywords: Quantum computing, qubit, superposition, entanglement, measurement of states, evolution of states, Scheduling problem, hard and soft constraints, evolutionary algorithm, quantum evolutionary algorithm.

683 Coerced Delay and Multi Additive Constraints QoS Routing Schemes

Authors: P.S. Prakash, S. Selvan

Abstract:

IP networks are evolving from a data communication infrastructure into a platform for many real-time applications such as video conferencing and IP telephony, which impose stringent Quality of Service (QoS) requirements. A fundamental issue in QoS routing is to find a path between a source-destination pair that satisfies two or more end-to-end constraints, a problem that is NP-hard or NP-complete. In this context, we present an algorithm, Multi Constraint Path Problem Version 3 (MCPv3), in which all constraints are approximated and a feasible path is returned in much quicker time. We present another algorithm, namely Delay Coerced Multi Constrained Routing (DCMCR), which coerces one constraint and approximates the remaining constraints. Our algorithm returns a feasible path, if one exists, in polynomial time between a source-destination pair whose first weight satisfies the first constraint and whose every other weight is bounded by the remaining constraints within a predefined approximation factor (a). We present our experimental results with different topologies and network conditions.

Keywords: Routing, Quality-of-Service (QoS), additive constraints, shortest path, delay coercion.

682 2-DOF Observer Based Controller for First Order with Dead Time Systems

Authors: Ashu Ahuja, Shiv Narayan, Jagdish Kumar

Abstract:

This paper realizes a 2-DOF controller structure for first-order plus dead-time systems. Coprime factorization is used to design the observer-based controller K(s), representing one degree of freedom. The problem is based on the H∞ norm of the mixed sensitivity function and aims to achieve stability, robustness and disturbance rejection. Then, the other degree of freedom, the prefilter F(s), is formulated as a fixed-structure polynomial controller to match the open-loop processing of the reference model. This model-matching problem is solved by minimizing the integral square error between the reference model and the proposed model. The feedback controller and prefilter designs are posed as optimization problems and solved using Particle Swarm Optimization (PSO). To show the efficiency of the designed approach, a variety of processes are taken and compared for analysis.
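
As a hedged sketch of the optimization step (PSO minimizing an integral square error), the example below tunes a first-order model's gain and time constant to match a reference step response; it does not reproduce the paper's K(s)/F(s) design, and all settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 400)
y_ref = 1.0 * (1 - np.exp(-t / 1.5))           # placeholder reference step response

def ise(params):
    # Integral square error between a candidate first-order response and the reference.
    K, tau = params
    y = K * (1 - np.exp(-t / max(tau, 1e-3)))
    return np.sum((y_ref - y) ** 2) * (t[1] - t[0])

# Plain PSO over the two parameters (K, tau).
n_particles, iters = 20, 60
pos = rng.uniform([0.1, 0.1], [3.0, 5.0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([ise(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("PSO estimate (K, tau):", gbest, " ISE:", ise(gbest))
```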

Keywords: 2-DOF, integral square error, mixed sensitivity function, observer based controller, particle swarm optimization, prefilter.

681 Fast and Efficient Algorithms for Evaluating Uniform and Nonuniform Lagrange and Newton Curves

Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong

Abstract:

Newton-Lagrange interpolations are widely used in numerical analysis. However, their construction requires quadratic computational time. In computer-aided geometric design (CAGD), there are polynomial curves, namely the Wang-Ball, DP and Dejdumrong curves, which have linear-time algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms of the Wang-Ball, DP and Dejdumrong curves. In order to use the Wang-Ball, DP and Dejdumrong algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, the algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated. Thus, the computational time for representing Newton-Lagrange polynomials can be reduced to linear complexity. In addition, other utilizations of CAGD curves to modify Newton-Lagrange curves become possible.
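
For background, the standard Newton divided-difference construction (quadratic time) and its nested, linear-time evaluation are sketched below; the conversion to Wang-Ball, DP or Dejdumrong form that the paper investigates is not shown, and the sample nodes are placeholders.

```python
import numpy as np

def newton_divided_differences(x, y):
    """Construct Newton-form coefficients (top row of the divided-difference table)."""
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton polynomial at t in linear time (nested form)."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + coef[k]
    return result

# Non-uniform nodes sampled from a test function (placeholder data).
x = np.array([0.0, 0.7, 1.5, 2.1, 3.0])
y = np.sin(x)
c = newton_divided_differences(x, y)
print(newton_eval(c, x, 1.0), "vs", np.sin(1.0))
```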

Keywords: Newton interpolation, Lagrange interpolation, linear complexity.

680 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows

Authors: J. P. Panda, K. Sasmal, H. V. Warrior

Abstract:

Eddy viscosity models in turbulence modeling can be mainly classified as linear and nonlinear models. Linear formulations are simple and require fewer computational resources, but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation of Reynolds stress anisotropy is adopted for the formulation of eddy viscosity, including all the possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than the predictions of the two-equation k-epsilon model. The proposed model can be easily incorporated in the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions where transverse shear induces higher vorticity, and for prediction of flow in estuaries and lakes, where the depth is comparatively small. The model predictions of marine turbulence and other related data (e.g. sea surface temperature, surface heat flux and vertical temperature profile) can be utilized in short-term ocean and climate forecasting and warning systems.

Keywords: Eddy viscosity, turbulence modeling, GOTM, CFD.

679 Solving Linear Matrix Equations by Matrix Decompositions

Authors: Yongxin Yuan, Kezheng Zuo

Abstract:

In this paper, a system of linear matrix equations is considered. A new necessary and sufficient condition for the consistency of the equations is derived by means of the generalized singular-value decomposition, and the explicit representation of the general solution is provided.

Keywords: Matrix equation, Generalized inverse, Generalized singular-value decomposition.

678 The Contraction Point for Phan-Thien/Tanner Model of Tube-Tooling Wire-Coating Flow

Authors: V. Ngamaramvaranggul, S. Thenissara

Abstract:

The simulation of the extrusion process is studied widely in order to both increase output and improve quality, with broad application in wire coating. The annular tube-tooling extrusion was set up using a model comprising the Navier-Stokes equations together with a rheological model of differential form based on the single-mode exponential Phan-Thien/Tanner constitutive equation, in a two-dimensional cylindrical coordinate system, for predicting the contraction point of the polymer melt beyond the die. Numerical solutions are sought through a semi-implicit Taylor-Galerkin pressure-correction finite element scheme. The investigation focused on incompressible creeping flow with long relaxation time, in terms of Weissenberg numbers up to 200. The isothermal case was considered, with the surface tension effect on the free surface in extrudate flow and no slip at the die wall. The Streamline Upwind Petrov-Galerkin method has been employed to stabilize the solution. The structure of the mesh after the die exit was adjusted following the prediction of both the top and bottom free surfaces, so as to keep the location of the contraction point around one unit length, which is close to experimental results.

Keywords: wire coating, free surface, tube-tooling, extrudate swell, surface tension, finite element method.

677 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi

Abstract:

Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires channel coefficient estimation. It results in the lowest bit error rate (BER) compared to all other combining techniques. However, the BER starts to deteriorate as errors are introduced in the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to that of MRC with perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general, and we hereinafter introduce it to the scientific community as a new "supra-optimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems are to benefit most from GMRC. As a result, many spin-off applications can be made to IP-based 4th-generation networks.

Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.

676 Complexity Analysis of Some Known Graph Coloring Instances

Authors: Jeffrey L. Duffany

Abstract:

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.

Keywords: graph coloring, complexity, algorithm.

675 Mean-Variance Optimization of Portfolios with Return of Premium Clauses in a DC Pension Plan with Multiple Contributors under Constant Elasticity of Variance Model

Authors: Bright O. Osu, Edikan E. Akpanibah, Chidinma Olunkwa

Abstract:

In this paper, mean-variance optimization of portfolios with return of premium clauses in a defined contribution (DC) pension plan with multiple contributors under the constant elasticity of variance (CEV) model is studied. Return clauses that permit the accumulated wealth of deceased members to be claimed are considered; the remaining wealth is not equally distributed among the remaining members as in the literature. We assume that, before investment, the surplus, which includes funds of members who died after retirement, adds to the total wealth. Next, we consider investments in a risk-free asset and a risky asset to meet the expected returns of the remaining members and obtain an optimization problem with the help of the extended Hamilton-Jacobi-Bellman equation. We obtained the optimal investment strategies for the two assets and the efficient frontier of the members by using a stochastic optimal control technique. Furthermore, we studied the effect of the various parameters on the optimal investment strategies and the effect of the risk-aversion level on the efficient frontier. We observed that the optimal investment strategy is the same as in the literature; secondly, we observed that the surplus decreases the proportion of the wealth invested in the risky asset.

Keywords: DC pension fund, Hamilton Jacobi Bellman equation, optimal investment strategies, stochastic optimal control technique, return of premiums clauses, mean-variance utility.

674 Appraisal of Methods for Identifying, Mapping, and Modelling of Fluvial Erosion in a Mining Environment

Authors: F. F. Howard, I. Yakubu, C. B. Boye, J. S. Y. Kuma

Abstract:

Natural and human activities, such as mining operations, expose the natural soil to adverse environmental conditions, leading to contamination of soil, groundwater, and surface water, which has negative effects on humans, flora, and fauna. Bare or partly exposed soil is most liable to fluvial erosion. This paper enumerates various methods used to identify, map, and model fluvial erosion in a mining environment. Classical, Artificial Intelligence (AI), and GIS methods have been reviewed. One of the many classical methods used to estimate river erosion is the Revised Universal Soil Loss Equation (RUSLE) model. The RUSLE model is easy to use, but its reliance on empirical relationships that may not always be applicable to specific circumstances or locations is a flaw. Other classical models for estimating fluvial erosion are the Soil and Water Assessment Tool (SWAT) and the Universal Soil Loss Equation (USLE). These models offer a more complete understanding of the underlying physical processes and encompass a wider range of situations; although more difficult to utilise, they depend on the availability and dependability of input data for correctness. AI can help deal with multivariate and complex difficulties, predict soil loss with higher accuracy than traditional methods, and be used to build unique models for identifying degraded areas. AI techniques have become popular as an alternative predictor for degraded environments. This research therefore proposes a hybrid of classical, AI, and GIS methods for efficient and effective modelling of fluvial erosion.
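
The RUSLE estimate mentioned above is a simple product of empirical factors; a minimal sketch follows, with purely illustrative factor values rather than site data.

```python
# Minimal sketch of the RUSLE soil-loss estimate A = R * K * LS * C * P.
def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) from rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover-management C, and support-practice P."""
    return R * K * LS * C * P

# Illustrative placeholder factor values, not measurements from the study area.
A = rusle_soil_loss(R=450.0, K=0.28, LS=1.6, C=0.35, P=0.8)
print(f"estimated soil loss: {A:.1f} t/ha/yr")
```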

Keywords: Fluvial erosion, classical methods, Artificial Intelligence, Geographic Information System.

673 Backstepping Design and Fractional Derivative Equation of Chaotic System

Authors: Ayub Khan, Net Ram Garg, Geeta Jain

Abstract:

In this paper, the backstepping method is proposed to synchronize two fractional-order systems. The simulation results show that this method can effectively synchronize the two chaotic systems.

Keywords: Backstepping method, Fractional order, Synchronization.

672 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models

Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo

Abstract:

There are many situations in which human activities have significant effects on the environment. Damage to the ozone layer is one of them. The objective of this work is to use the least squares method, considering linear, exponential, logarithmic, power and second-degree polynomial models, to analyze, through the coefficient of determination (R²), which model best fits the behavior of chlorodifluoromethane (HCFC-142b) concentrations in parts per trillion between 1992 and 2018, as well as to estimate future concentrations 5 and 10 periods ahead, i.e. the concentration of this pollutant in the years 2023 and 2028, for each of the fitted models. A total of 809 observations of the HCFC-142b concentration at one of the monitoring stations for ozone-depleting precursor gases during the period studied were selected, and, using these data, the statistical software Excel was used to make the scatter plots for each of the fitted models. It was observed that the logarithmic fit was the model that best fitted the data set, since, besides having a significant R², its fitted curve was compatible with the natural trend curve of the phenomenon.
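
A minimal sketch of the model-comparison procedure (fit several functional forms by least squares and rank them by R²) is given below; the time series is a synthetic placeholder, not the 809 station observations used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder series standing in for HCFC-142b concentrations (ppt) over time.
t = np.arange(1, 101, dtype=float)
y = 8.0 + 4.5 * np.log(t) + rng.normal(0, 0.3, t.size)

def r2(y_obs, y_fit):
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

fits = {}
fits["linear"] = np.polyval(np.polyfit(t, y, 1), t)
fits["poly2"] = np.polyval(np.polyfit(t, y, 2), t)
b, a = np.polyfit(np.log(t), y, 1)            # logarithmic: y = a + b*ln(t)
fits["logarithmic"] = a + b * np.log(t)
b, ln_a = np.polyfit(t, np.log(y), 1)          # exponential: y = a*exp(b*t)
fits["exponential"] = np.exp(ln_a) * np.exp(b * t)
b, ln_a = np.polyfit(np.log(t), np.log(y), 1)  # power: y = a*t**b
fits["power"] = np.exp(ln_a) * t**b

for name, y_fit in sorted(fits.items(), key=lambda kv: -r2(y, kv[1])):
    print(f"{name:12s} R^2 = {r2(y, y_fit):.4f}")
```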

Keywords: Chlorodifluoromethane (HCFC-142b), ozone (O3), least squares method, regression models.

671 A New Verified Method for Solving Nonlinear Equations

Authors: Taher Lotfi, Parisa Bakhtiari, Katayoun Mahdiani, Mehdi Salimi

Abstract:

In this paper, a verified extension of the Ostrowski method, which computes enclosures of the solutions of a given nonlinear equation, is introduced. Error analysis and convergence are also discussed. Some examples implemented with INTLAB are included to illustrate the validity and applicability of the scheme.
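
For context, the classical (point) Ostrowski iteration that the paper extends is sketched below; the verified, interval-arithmetic version with INTLAB is the paper's contribution and is not reproduced here.

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Classical fourth-order Ostrowski method for f(x) = 0 (point arithmetic only)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                                   # Newton half-step
        fy = f(y)
        x_new = y - fy * fx / (dfx * (fx - 2 * fy))        # Ostrowski correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x^3 - 2x - 5 (a classical test equation).
root = ostrowski(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2, x0=2.0)
print(root)  # ~2.0945514815423265
```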

Keywords: Interval analysis, nonlinear equations, Ostrowski method.

670 Accuracy of Displacement Estimation and Selection of Capacitors for a Four Degrees of Freedom Capacitive Force Sensor

Authors: Chisato Murakami, Makoto Takahashi

Abstract:

Force sensors have been used as a requisite for obtaining information on the amount and directions of forces on the skin surface. We have developed a four-degrees-of-freedom capacitive force sensor (approximately 20×20×5 mm³) that has a flexible structure and sixteen parallel-plate capacitors. An iterative algorithm was developed for estimating four displacements from the sixteen capacitances using a fourth-order polynomial approximation of the characteristics between capacitance and displacement. The estimation results from measured capacitances had large errors caused by deterioration of the characteristics. In this study, effective capacitors carrying the major information were selected on the basis of the capacitance change range and the characteristic shape. Maximum errors at calibration and non-calibration points were 25% and 6.8%. Although the maximum error was larger than the desired value, the small averaged value indicated that only a few points had large errors. On the other hand, the error at non-calibration points was within the desired value.

Keywords: Force sensors, capacitive sensors, estimation, iterative algorithms.

669 Damping and Stability Evaluation for the Dynamical Hunting Motion of the Bullet Train Wheel Axle Equipped with Cylindrical Wheel Treads

Authors: Barenten Suciu

Abstract:

Classical matrix calculus and Routh-Hurwitz stability conditions, applied to the snake-like motion of the conical wheel axle, lead to the conclusion that the hunting mode is inherently unstable, and its natural frequency is a complex number. In order to analytically solve such a complicated vibration model, either the inertia terms were neglected, in the model designated as geometrical, or restrictions on the creep coefficients and yawing diameter were imposed, in the so-called dynamical model. Here, an alternative solution is proposed to solve the hunting mode, based on the observation that the bullet train wheel axle is equipped with cylindrical wheels. One argues that for such wheel treads, the geometrical hunting is irrelevant, since its natural frequency becomes nil, but the dynamical hunting is significant since its natural frequency reduces to a real number. Moreover, one illustrates that the geometrical simplification of the wheel causes the stabilization of the hunting mode, since the characteristic quartic equation, derived for conical wheels, reduces to a quadratic equation with positive coefficients for cylindrical wheels. Quite simple analytical expressions for the damping ratio and natural frequency are obtained, without imposing restrictions on the contact model. Graphs of the time-dependent hunting lateral perturbation, including the maximal and inflexion points, are presented both for the critically-damped and the over-damped wheel axles.
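
For reference, the standard relations between a quadratic characteristic equation with positive coefficients and its damping ratio and natural frequency (generic second-order vibration theory, not the authors' specific coefficient expressions) are:

```latex
a\lambda^{2} + b\lambda + c = 0,\quad a,b,c > 0
\;\Longrightarrow\;
\omega_{n} = \sqrt{\frac{c}{a}},\qquad
\zeta = \frac{b}{2\sqrt{ac}}
```

so the mode is stable for any positive coefficients, over-damped when ζ > 1 and critically damped when ζ = 1.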

Keywords: Bullet train, dynamical hunting, cylindrical wheels, damping, stability, creep, vibration analysis.

668 Existence of Solution for Boundary Value Problems of Differential Equations with Delay

Authors: Xiguang Li

Abstract:

In this paper, by using a fixed point theorem, the upper and lower solutions method, and the monotone iterative technique, we prove the existence of maximum and minimum solutions of differential equations with delay, which improves and generalizes the results of related papers.

Keywords: Banach space, boundary value problem, differential equation, delay.

667 Hydrolysis of Hull-Less Pumpkin Oil Cake Protein Isolate by Pepsin

Authors: Ivan Živanović, Žužana Vaštag, Senka Popović, Ljiljana Popović, Draginja Peričin

Abstract:

The present work is an investigation of the hydrolysis of hull-less pumpkin (Cucurbita pepo L.) oil cake protein isolate (PuOC PI) by pepsin. To examine the effectiveness and suitability of pepsin towards PuOC PI, the kinetic parameters of pepsin on PuOC PI were determined, and then the hydrolysis process was studied using Response Surface Methodology (RSM). The hydrolysis was carried out at a temperature of 30°C and pH 3.00. Time and initial enzyme/substrate ratio (E/S), at three levels each, were selected as the independent parameters. The degree of hydrolysis, DH, was measured after 20, 30 and 40 minutes, at initial E/S of 0.7, 1 and 1.3 mA/mg proteins. Since the proposed second-order polynomial model showed a good fit with the experimental data (R² = 0.9822), the obtained mathematical model can be used for monitoring the hydrolysis of PuOC PI by pepsin under the studied experimental conditions, varying the time and initial E/S. To achieve the highest value of DH (39.13%), the obtained optimum conditions for time and initial E/S were 30 min and 1.024 mA/mg proteins.
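
The second-order polynomial (response-surface) model can be fitted by ordinary least squares as sketched below; the factor levels follow the abstract, but the DH responses and the resulting optimum are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical 3x3 design: time (min) and initial E/S (mA/mg protein), with
# placeholder degree-of-hydrolysis (DH, %) responses.
time = np.array([20, 20, 20, 30, 30, 30, 40, 40, 40], dtype=float)
es   = np.array([0.7, 1.0, 1.3] * 3, dtype=float)
dh   = np.array([28, 33, 31, 34, 39, 36, 33, 37, 35], dtype=float)

# Second-order response-surface model:
# DH = b0 + b1*t + b2*r + b3*t^2 + b4*r^2 + b5*t*r
X = np.column_stack([np.ones_like(time), time, es, time**2, es**2, time * es])
b, *_ = np.linalg.lstsq(X, dh, rcond=None)

fitted = X @ b
r2 = 1 - np.sum((dh - fitted) ** 2) / np.sum((dh - dh.mean()) ** 2)
print("coefficients:", np.round(b, 4), " R^2 =", round(r2, 4))

# Locate the optimum of the fitted surface on a fine grid.
tt, rr = np.meshgrid(np.linspace(20, 40, 81), np.linspace(0.7, 1.3, 61))
surface = b[0] + b[1]*tt + b[2]*rr + b[3]*tt**2 + b[4]*rr**2 + b[5]*tt*rr
i = np.unravel_index(surface.argmax(), surface.shape)
print("grid optimum: time =", tt[i], "min, E/S =", round(rr[i], 3))
```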

Keywords: Enzymatic hydrolysis, Pepsin, Pumpkin (Cucurbita pepo L.) oil cake protein isolate, Response surface methodology.

666 Despiking of Turbulent Flow Data in Gravel Bed Stream

Authors: Ratul Das

Abstract:

The present experimental study investigates the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams in order to ascertain near-bed turbulence at low Reynolds number. The interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, and therefore filtering the data is essential. Nortek’s Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to note the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. Velocity power spectra of the despiked signals with the best combination of velocity threshold (VT) and acceleration threshold (AT) are proposed, which show a satisfactory fit with the Kolmogorov “–5/3 scaling law” in the inertial sub-range. Also, the velocity distributions below the roughness crest level fairly follow a third-degree polynomial series.
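
A simplified sketch of the acceleration-threshold despiking idea is given below: samples whose point-to-point acceleration exceeds k·g and whose deviation also exceeds a velocity threshold are flagged and replaced by interpolation. The thresholds, sampling rate, and data are assumptions for illustration, not the study's settings.

```python
import numpy as np

def despike_acceleration_threshold(u, fs, k=1.5, g=9.81):
    """Flag samples whose point-to-point acceleration exceeds k*g and whose
    deviation exceeds a velocity threshold; replace them by linear interpolation."""
    u = np.asarray(u, dtype=float)
    acc = np.zeros_like(u)
    acc[1:] = np.diff(u) * fs                      # point-to-point acceleration
    at = k * g                                     # acceleration threshold (AT)
    vt = 1.5 * u.std()                             # velocity threshold (VT), assumed
    spikes = (np.abs(acc) > at) & (np.abs(u - u.mean()) > vt)
    good = ~spikes
    u_clean = u.copy()
    u_clean[spikes] = np.interp(np.flatnonzero(spikes), np.flatnonzero(good), u[good])
    return u_clean, spikes

# Synthetic ADV-like record at 100 Hz with injected spikes (placeholder data).
rng = np.random.default_rng(6)
fs = 100.0
u = 0.35 + 0.03 * rng.normal(size=2000)
u[[150, 151, 900, 1400]] += np.array([0.6, -0.5, 0.8, -0.7])
u_clean, spikes = despike_acceleration_threshold(u, fs)
print("spikes detected:", spikes.sum())
```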

Keywords: Acoustic Doppler Velocimeter, gravel-bed, spike removal, Reynolds shear stress, near-bed turbulence, velocity power spectra.
