Search results for: τ₁τ₂-regular generalized star star closed sets.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1394

1064 Generalized Predictive Control of Batch Polymerization Reactor

Authors: R. Khaniki, M.B. Menhaj, H. Eliasi

Abstract:

This paper describes the application of a model predictive controller to the problem of batch reactor temperature control. Although a great deal of work has been done to improve reactor throughput using batch sequence control, the control of the actual reactor temperature remains a difficult problem for many operators of these processes. Temperature control is important because many chemical reactions are sensitive to temperature for the formation of desired products. The controller consists of two parts: (1) a nonlinear control method, GLC (Global Linearizing Control), used to create a linear model of the system, and (2) a model predictive controller used to obtain the optimal input control sequence. The reactor temperature is tuned to track a predetermined temperature trajectory applied to the batch reactor. To do so, two input signals are used: the electrical power and the flow of coolant in the coil. Simulation results show that the proposed controller has remarkable performance in tracking the reference trajectory while remaining robust against noise imposed on the system output.
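
As a rough illustration of the receding-horizon idea behind GPC (not the authors' implementation), the sketch below computes an unconstrained optimal input sequence for a hypothetical first-order linearized temperature model; the model coefficients, horizon, and control weighting are placeholders.

```python
import numpy as np

# Minimal unconstrained GPC step for a SISO first-order model
# y[k+1] = a*y[k] + b*u[k]  (illustrative linearized reactor temperature model)
a, b = 0.95, 0.08          # hypothetical model coefficients
N, lam = 10, 0.1           # prediction horizon and control weighting

def gpc_step(y0, ref):
    """Return the first control move that tracks `ref` over the horizon."""
    # Free response F*y0 and forced-response matrix G (lower-triangular)
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Least-squares solution of min ||G u + F y0 - ref||^2 + lam ||u||^2
    u = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (ref - F * y0))
    return u[0]            # receding horizon: apply only the first input

# Usage: track a constant setpoint of 330 K starting from 300 K
print(gpc_step(300.0, np.full(N, 330.0)))
```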

Keywords: Generalized Predictive Control (GPC), Temperature Control, Global Linearizing Control (GLC), Batch Reactor.

1063 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multidiversity stochastically local area fading channel when the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
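
A minimal Monte Carlo sketch of the kind of composite channel described above, assuming a lognormally driven Poisson number of scatterers with Rayleigh-distributed diffuse marks and a Nakagami line-of-sight term; all parameter values are illustrative and the snippet is not the paper's analytical model.

```python
import numpy as np
from scipy.stats import nakagami

rng = np.random.default_rng(0)

def fading_sample(mean_scatterers=8.0, sigma_ln=0.5, nak_m=2.0, los_power=1.0):
    """One draw of the composite fading envelope: lognormal intensity ->
    Poisson number of scatterers -> Rayleigh/uniform-phase diffuse sum,
    plus a coherent Nakagami-distributed line-of-sight term."""
    intensity = mean_scatterers * rng.lognormal(mean=0.0, sigma=sigma_ln)  # doubly stochastic driver
    n = rng.poisson(intensity)                                             # number of scattering centres
    amp = rng.rayleigh(scale=1.0 / np.sqrt(max(n, 1)), size=n)             # diffuse marks
    phase = rng.uniform(0, 2 * np.pi, size=n)
    diffuse = np.sum(amp * np.exp(1j * phase))
    los = np.sqrt(los_power) * nakagami.rvs(nak_m, random_state=rng)       # coherent LOS mark
    return np.abs(los + diffuse)

samples = np.array([fading_sample() for _ in range(10000)])
print(samples.mean(), samples.var())
```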

Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.

1062 Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method

Authors: Lina Wu, Jia Liu, Ye Li

Abstract:

The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for a p-energy functional have been applied in the calculus of variations method as computation techniques. Stokes’ Theorem, the Cauchy-Schwarz Inequality, Hardy-Sobolev type inequalities, and the Bochner Formula have been used as estimation techniques to bound the derived p-harmonic stability inequality from below and above. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to remain in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. Therefore, the possibility of a non-constant p-energy minimizing map has been ruled out and the constant property for a p-energy minimizing map has been obtained. Our research finding is the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space), and is also bounded by the curvature values of the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our finding on an ellipsoid is a generalization of mathematicians’ results on a sphere. Our result also extends mathematicians’ Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
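
For orientation, the standard p-energy functional and the p-harmonic map equation obtained from its first variation (standard textbook form, with A the second fundamental form of the target submanifold; this is background notation, not the paper's derivation):

```latex
E_p(u) = \frac{1}{p}\int_{\mathbb{R}^m} |\nabla u|^{p}\,dx,
\qquad
\operatorname{div}\!\left(|\nabla u|^{p-2}\nabla u\right)
+ |\nabla u|^{p-2}\, A(u)(\nabla u,\nabla u) = 0 .
```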

Keywords: Bochner Formula, Stokes’ Theorem, Cauchy-Schwarz Inequality, first and second variation formulas, Hardy-Sobolev type inequalities, Liouville-type problem, p-harmonic map.

1061 Properties and Approximation Distribution Reductions in Multigranulation Rough Set Model

Authors:

Abstract:

Some properties of approximation sets are studied for the optimistic multi-granulation model in rough set theory using maximal compatible classes. The relationships among lower and upper approximations in single and multiple granulation are compared and discussed. By designing Boolean functions and discernibility matrices in incomplete information systems, the lower and upper approximation sets and reductions in multi-granulation environments can be found. The correctness of the computation approach is confirmed by examples. The conclusions obtained are suitable for further investigation of the multiple granulation rough set model (RSM).
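
A small sketch of the optimistic multigranulation approximations (standard definitions), with toy granulations standing in for the maximal compatible classes of an incomplete information system; the objects and classes are illustrative only.

```python
# Optimistic multigranulation approximations.
# `granulations` maps each granulation name to a dict: object -> class
# (the set of objects compatible with it).

def optimistic_lower(X, granulations):
    """x is in the lower approximation if at least one granulation puts
    its class entirely inside X."""
    universe = set().union(*(set(g) for g in granulations.values()))
    return {x for x in universe
            if any(g[x] <= X for g in granulations.values())}

def optimistic_upper(X, granulations):
    """Dual definition: every granulation's class of x must meet X."""
    universe = set().union(*(set(g) for g in granulations.values()))
    return {x for x in universe
            if all(g[x] & X for g in granulations.values())}

# Tiny example with two granulations over objects {1,...,5}
R1 = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4, 5}, 5: {4, 5}}
R2 = {1: {1}, 2: {2, 3}, 3: {2, 3}, 4: {4}, 5: {5}}
X = {1, 2, 3}
print(optimistic_lower(X, {"R1": R1, "R2": R2}))   # {1, 2, 3}
print(optimistic_upper(X, {"R1": R1, "R2": R2}))   # {1, 2, 3}
```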

Keywords: Incomplete information system, maximal compatible class, multi-granulation rough set model, reduction.

1060 Urban Waste Water Governance in South Africa: A Case Study of Stellenbosch

Authors: R. Malisa, E. Schwella, K. I. Theletsane

Abstract:

Due to climate change, population growth, and rapid urbanization, the demand for water in South Africa is inevitably surpassing supply. To address similar challenges globally, there has been a paradigm shift in urban waste water management from a conventional “government” paradigm to a “governance” paradigm. From the governance paradigm, the Integrated Urban Water Management (IUWM) principle emerged. This principle emphasizes efficient urban waste water treatment and the production of high-quality recyclable effluent, thereby mimicking natural water systems in their efficient recycling of water and averting depletion of natural water resources. The objective of this study was to investigate drivers for shifting the current urban waste water management approach from a “government” paradigm towards “governance”. The study was conducted through the Interactive Management soft systems research methodology, which follows a qualitative research design. A case study methodology was employed, guided by a realism research philosophy. The qualitative data gathered were analyzed through interpretive structural modelling using the Concept Star for Professionals Decision-Making tools (CSPDM) version 3.64. The constructed model showed that the main drivers for shifting Stellenbosch municipal urban waste water management towards IUWM “governance” principles are mainly social elements, characterized by overambitious public expectations of municipal water service delivery, misinterpretation of the constitutional right of access to adequate clean water and sanitation, and different communities’ perceptions of recycled water. Inadequate public participation also emerged as a strong driver. However, disruptive events such as drought may play a positive role in raising awareness of the value of water, resulting in a shift in perceptions of recycled water. Once the social elements are addressed, the alignment of governance and administration elements towards IUWM is achievable. Hence, the point of departure for the desired paradigm shift is a change in water service authorities’ and serviced communities’ perceptions of, and behaviours towards, shifting urban waste water management approaches from the “government” to the “governance” paradigm.

Keywords: Integrated urban water management, urban water system, waste water governance, waste water treatment works.

1059 Comparison of Imputation Techniques for Efficient Prediction of Software Fault Proneness in Classes

Authors: Geeta Sikka, Arvinder Kaur Takkar, Moin Uddin

Abstract:

Missing data is a persistent problem in almost all areas of empirical research. Missing data must be treated very carefully, as data play a fundamental role in every analysis; improper treatment can distort the analysis or generate biased results. In this paper, we compare and contrast various imputation techniques on data sets with missing values and make an empirical evaluation of these methods so as to construct quality software models. Our empirical study is based on two public NASA data sets, KC4 and KC1. The actual data sets, of 125 cases and 2107 cases respectively and without any missing values, were considered. These data sets were used to create Missing at Random (MAR) data. Listwise Deletion (LD), Mean Substitution (MS), Interpolation, Regression with an error term, and Expectation-Maximization (EM) approaches were used to compare the effects of the various techniques.
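
A brief sketch, using synthetic data rather than the NASA KC1/KC4 sets, of how Missing-at-Random values can be injected and two of the simpler treatments (listwise deletion, mean substitution) plus interpolation compared against the complete data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Complete synthetic table, then ~10% values removed at random (MAR stand-in)
full = pd.DataFrame(rng.normal(size=(200, 4)), columns=list("ABCD"))
data = full.mask(rng.uniform(size=full.shape) < 0.1)

listwise = data.dropna()                               # Listwise Deletion (LD)
mean_sub = data.fillna(data.mean())                    # Mean Substitution (MS)
interp   = data.interpolate(limit_direction="both")    # simple interpolation

for name, df in [("LD", listwise), ("MS", mean_sub), ("Interp", interp)]:
    # Crude quality check: deviation of column means from the complete data
    print(name, (df.mean() - full.mean()).abs().round(3).to_dict())
```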

Keywords: Missing data, Imputation, Missing Data Techniques.

1058 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of the paper is to estimate the US small wind turbines market potential and forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work an exponential distribution is used for modeling replacement purchases; its only parameter is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57080 (confidence interval from 49294 to 64866 at P = 0.95) for an average turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) for an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model. This required a price forecast, for which a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) for an average lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) for an average lifetime of 20 years. In both cases the explained variance is 95.3%.
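
A sketch of fitting the basic Bass model to annual sales by nonlinear regression, using the standard closed-form Bass adoption curve; the sales figures below are placeholders, not the AWEA statistics, and the snippet ignores replacement purchases and price effects.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions N(t) under the standard Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def bass_annual_sales(t, m, p, q):
    """Annual (first-purchase) sales as increments of cumulative adoptions."""
    return bass_cumulative(t, m, p, q) - bass_cumulative(t - 1.0, m, p, q)

# Hypothetical annual unit sales for years 1..12 (illustrative numbers only)
years = np.arange(1, 13, dtype=float)
sales = np.array([800, 1200, 1900, 2700, 3500, 4200, 4800, 5100, 5200, 5000, 4600, 4100], float)

params, _ = curve_fit(bass_annual_sales, years, sales, p0=(60000, 0.01, 0.3), maxfev=20000)
m_hat, p_hat, q_hat = params
print(f"market potential m={m_hat:.0f}, innovation p={p_hat:.4f}, imitation q={q_hat:.4f}")
```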

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.

1057 A Generalized Sparse Bayesian Learning Algorithm for Near-Field Synthetic Aperture Radar Imaging: By Exploiting Impropriety and Noncircularity

Authors: Pan Long, Bi Dongjie, Li Xifeng, Xie Yongle

Abstract:

Near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&E) technique. This paper investigates the complex-valued signal processing related to the near-field SAR imaging system, where the measurement data turn out to be noncircular and improper, meaning that the complex-valued data are correlated with their complex conjugate. Furthermore, we discover that the degree of impropriety of the measurement data and that of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed, taking impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain with the help of the noncircular assumption on the signals.
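
A small sketch of one common way to quantify impropriety, the circularity coefficient |E[z²]|/E[|z|²] of a zero-mean complex signal; this is a standard measure and not necessarily the one used in the paper.

```python
import numpy as np

def circularity_coefficient(z):
    """Degree of impropriety of a zero-mean complex signal:
    |E[z^2]| / E[|z|^2]; 0 for proper (circular) data, 1 for maximally improper."""
    z = np.asarray(z) - np.mean(z)
    return np.abs(np.mean(z * z)) / np.mean(np.abs(z) ** 2)

rng = np.random.default_rng(0)
proper   = rng.normal(size=4096) + 1j * rng.normal(size=4096)     # circular (equal I/Q power)
improper = rng.normal(size=4096) + 0.2j * rng.normal(size=4096)   # unequal I/Q power
print(circularity_coefficient(proper), circularity_coefficient(improper))
```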

Keywords: Complex-valued signal processing, synthetic aperture radar (SAR), 2-D radar imaging, compressive sensing, Sparse Bayesian learning.

1056 Comanche – A Compiler-Driven I/O Management System

Authors: Wendy Zhang, Ernst L. Leiss, Huilin Ye

Abstract:

Most scientific programs have large input and output data sets that require out-of-core programming or the use of virtual memory management (VMM). Out-of-core programming is very error-prone and tedious; as a result, it is generally avoided. However, in many instances, VMM is not an effective approach because it often results in substantial performance reduction. In contrast, compiler-driven I/O management allows a program's data sets to be retrieved in parts, called blocks or tiles. Comanche (COmpiler MANaged caCHE) is a compiler combined with a user-level runtime system that can be used to replace standard VMM for out-of-core programs. We describe Comanche and demonstrate on a number of representative problems that it substantially outperforms VMM. Significantly, our system does not require any special services from the operating system and does not require modification of the operating system kernel.
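
A toy sketch of the tile-by-tile access pattern that compiler-managed I/O automates, here done by hand with a memory-mapped file; the file name, array shape, and tile size are placeholders and the snippet is not Comanche's mechanism.

```python
import numpy as np

# Blocked traversal of a data set larger than we want to keep in RAM.
shape, tile = (4000, 4000), 1000
data = np.memmap("big_matrix.dat", dtype=np.float32, mode="w+", shape=shape)

total = 0.0
for i in range(0, shape[0], tile):
    for j in range(0, shape[1], tile):
        block = data[i:i + tile, j:j + tile]   # only this tile is touched
        block += 1.0                           # in-place update of the tile
        total += float(block.sum())
data.flush()
print(total)
```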

Keywords: I/O Management, Out-of-core, Compiler, Tile mapping.

1055 A Sociocybernetics Data Analysis Using Causality in Tourism Networks

Authors: M. Lloret-Climent, J. Nescolarde-Selva

Abstract:

The aim of this paper is to propose a mathematical model to determine invariant sets, set covering, orbits and, in particular, attractors in the set of tourism variables. Analysis was carried out based on a pre-designed algorithm and applying our interpretation of chaos theory developed in the context of General Systems Theory. This article sets out the causal relationships associated with tourist flows in order to enable the formulation of appropriate strategies. Our results can be applied to numerous cases. For example, in the analysis of tourist flows, these findings can be used to determine whether the behaviour of certain groups affects that of other groups and to analyse tourist behaviour in terms of the most relevant variables. Unlike statistical analyses that merely provide information on current data, our method uses orbit analysis to forecast, if attractors are found, the behaviour of tourist variables in the immediate future.

Keywords: Attractor, invariant set, orbits, tourist variables.

1054 A Hybrid Neural Network and Traditional Approach for Forecasting Lumpy Demand

Authors: A. Nasiri Pour, B. Rostami Tabar, A. Rahimzadeh

Abstract:

Accurate demand forecasting is one of the key issues in inventory management of spare parts. The problem of modeling future consumption becomes especially difficult for lumpy patterns, which are characterized by intervals in which there is no demand and periods with actual demand occurrences with large variation in demand levels. Many forecasting methods perform poorly when demand for an item is lumpy. In this study, based on the characteristics of lumpy demand patterns of spare parts, a hybrid forecasting approach has been developed which uses a multi-layered perceptron neural network and a traditional recursive method for forecasting future demands. In the described approach, the multi-layered perceptron is adapted to forecast occurrences of non-zero demands, and a conventional recursive method is then used to estimate the quantity of non-zero demands. In order to evaluate the performance of the proposed approach, its forecasts were compared with those obtained by using the Syntetos & Boylan approximation, a recently employed multi-layered perceptron neural network, a generalized regression neural network, and an Elman recurrent neural network. The models were applied to forecast future demand for spare parts of the Arak Petrochemical Company in Iran, using 30 types of real data sets. The results indicate that the forecasts obtained by using our proposed model are superior to those obtained by using the other methods.
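
For context, a sketch of the conventional recursive part, the Syntetos & Boylan approximation for intermittent demand; in the hybrid approach the neural network would supply the occurrence forecasts, which this standalone sketch does not include, and the demand history below is invented.

```python
import numpy as np

def syntetos_boylan(demand, alpha=0.1):
    """Syntetos-Boylan approximation: exponentially smooth the non-zero demand
    size z and the inter-demand interval p on each demand occurrence, then
    forecast (1 - alpha/2) * z / p units per period."""
    z = p = None
    periods_since = 1
    for d in demand:
        if d > 0:
            if z is None:                      # initialise on first occurrence
                z, p = float(d), float(periods_since)
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (periods_since - p)
            periods_since = 1
        else:
            periods_since += 1
    return (1 - alpha / 2) * z / p

history = np.array([0, 0, 5, 0, 0, 0, 12, 0, 3, 0, 0, 7])
print(syntetos_boylan(history))
```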

Keywords: Lumpy Demand, Neural Network, Forecasting, Hybrid Approach.

1053 Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC) within Operational Research (OR) with Sustainability and Phenomenology

Authors: Al-Salamin Hussain, Elias O. Tembe

Abstract:

Supply chain (SC) is an operational research (OR) approach and technique which acts as a catalyst within the central nervous system of business today. Without SC, any type of business is in the doldrums, hence entropy. SC is the lifeblood of business today because it is the pivotal hub which provides imperative competitive advantage. The paper presents a conceptual framework dubbed the Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC). The term homomorphic is derived from the abstract algebraic term homomorphism (same shape), which also embeds the following mathematical application sets: monomorphisms, isomorphisms, automorphisms, and endomorphisms. The HCEFSC is intertwined and integrated with wide and broad sets of elements.

Keywords: Automorphisms, Homomorphism, Monomorphisms, Supply Chain.

1052 Learning Algorithms for Fuzzy Inference Systems Composed of Double- and Single-Input Rule Modules

Authors: Hirofumi Miyajima, Kazuya Kishida, Noritaka Shigei, Hiromi Miyajima

Abstract:

Most self-tuning fuzzy systems, which are automatically constructed from learning data, are based on the steepest descent method (SDM). However, this approach often requires a long convergence time and gets stuck in a shallow local minimum. One solution is to use fuzzy rule modules with a small number of inputs, such as DIRMs (Double-Input Rule Modules) and SIRMs (Single-Input Rule Modules). In this paper, we consider a (generalized) DIRMs model composed of double- and single-input rule modules. Further, in order to reduce the redundant modules of the (generalized) DIRMs model, pruning and generative learning algorithms for the model are suggested. To show their effectiveness, numerical simulations for function approximation, Box-Jenkins, and obstacle avoidance problems are performed.

Keywords: Box-Jenkins’s problem, Double-input rule module, Fuzzy inference model, Obstacle avoidance, Single-input rule module.

1051 A Neutral Set Approach for Applying TOPSIS in Maintenance Strategy Selection

Authors: C. Ardil

Abstract:

This paper introduces the concept of neutral sets (NSs) and explores various operations on NSs, along with their associated properties. The foundation of the Neutral Set framework lies in ontological neutrality and the principles of logic, including the Law of Non-Contradiction. By encompassing components for possibility, indeterminacy, and necessity, the NS framework provides a flexible representation of truth, uncertainty, and necessity, accommodating diverse ontological perspectives without presupposing specific existential commitments. The inclusion of Possibility acknowledges the spectrum of potential states or propositions, promoting neutrality by accommodating various viewpoints. Indeterminacy reflects the inherent uncertainty in understanding reality, refraining from making definitive ontological commitments in uncertain situations. Necessity captures propositions that must hold true under all circumstances, aligning with the principle of logical consistency and implicitly supporting the Law of Non-Contradiction. Subsequently, a neutral set-TOPSIS approach is applied in the maintenance strategy selection problem, demonstrating the practical applicability of the NS framework. The paper further explores uncertainty relations and presents the fundamental preliminaries of NS theory, emphasizing its role in fostering ontological neutrality and logical coherence in reasoning.
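
For reference, a sketch of the classical crisp TOPSIS ranking that the neutral-set variant builds on; the strategy scores, weights, and criterion types below are illustrative, and the neutral-set extension itself is not reproduced here.

```python
import numpy as np

def topsis(X, w, benefit):
    """Classical (crisp) TOPSIS: vector-normalise the decision matrix, weight it,
    find the ideal and anti-ideal solutions, and rank by relative closeness."""
    X = np.asarray(X, float)
    R = X / np.linalg.norm(X, axis=0)              # vector normalisation
    V = R * w                                      # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)            # closeness coefficient per alternative

# Hypothetical maintenance strategies scored on three criteria (last one is a cost)
scores = [[7, 8, 4], [9, 6, 5], [6, 9, 3]]
w = np.array([0.4, 0.4, 0.2])
print(topsis(scores, w, benefit=np.array([True, True, False])))
```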

Keywords: Uncertainty sets, neutral sets, maintenance strategy selection, multiple criteria decision-making analysis, MCDM, uncertainty decision analysis, distance function, multiple attribute decision making, selection method, uncertainty, TOPSIS.

1050 Study of Natural Convection Heat Transfer of Plate-Fin Heat Sink in a Closed Enclosure

Authors: Han-Taw Chen, Tzu-Hsiang Lin, Chung-Hou Lai

Abstract:

The present study applies the inverse method and three-dimensional commercial CFD software in conjunction with experimental temperature data to investigate the heat transfer and fluid flow characteristics of a plate-fin heat sink in a rectangular closed enclosure. The inverse method, with the finite difference method and the experimental temperature data, is applied to determine the approximate heat transfer coefficient. Then, based on the obtained results, the zero-equation turbulence model is used to obtain the heat transfer and fluid flow characteristics between two fins. To validate the accuracy of the results obtained, a comparison of the heat transfer coefficient is made. The obtained temperature at selected measurement locations on the fin is also compared with experimental data. The effect of the height of the rectangular enclosure on the obtained results is discussed.

Keywords: Inverse method, FLUENT, Plate-fin heat sink, Heat transfer characteristics.

1049 Experimental Results about the Dynamics of the Generalized Belief Propagation Used on LDPC Codes

Authors: Jean-Christophe Sibel, Sylvain Reynal, David Declercq

Abstract:

In the context of channel coding, the Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a map to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields relevant performance of the GBP. Moreover, we investigate the behavior of the GBP considered as a dynamical system in order to understand the way it evolves in terms of time and of the noise power of the channel. To this end we make use of classical measures and we introduce a new measure, called the hyperspheres method, that makes it possible to determine the size of the attractors.

Keywords: iterative decoder, LDPC, region-graph, chaos.

1048 Computational and Experimental Investigation of Supersonic Flow and their Controls

Authors: Vasana M. Don, Eldad J. Avital, Fariborz Motallebi

Abstract:

Supersonic open and closed cavity flows are investigated experimentally and computationally. A free-stream Mach number of two is set. Schlieren imaging is used to visualise the flow behaviour, showing stark differences between the open and closed configurations. Computational Fluid Dynamics (CFD) is used to simulate the open cavity flow with an aspect ratio of 4. A rear wall treatment is implemented in order to pursue a simple passive control approach. Good qualitative agreement is achieved between the experimental flow visualisation and the CFD in terms of the expansion-shock wave system. The cavity oscillations are shown to be dominated by the first and third Rossiter modes, combining into high fluctuations of a non-linear nature above the cavity rear edge. A simple rear wall treatment in the form of a hole shows a mixed effect on the flow oscillations; RMS contours and time-history density fluctuations are given and analysed.
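
For reference, the semi-empirical Rossiter formula commonly used to estimate cavity oscillation mode frequencies (standard form, not taken from the paper), where L is the cavity length, U∞ and M∞ the free-stream velocity and Mach number, κ the ratio of shear-layer convection speed to U∞, and α an empirical phase-lag constant:

```latex
f_m = \frac{U_\infty}{L}\,\frac{m-\alpha}{M_\infty + 1/\kappa},
\qquad m = 1, 2, 3, \dots
```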

Keywords: Supersonic, Schlieren, open-cavity, flow simulation, passive control.

1047 Transient Population Dynamics of Phase Singularities in 2D Beeler-Reuter Model

Authors: Hidetoshi Konno, Akio Suzuki

Abstract:

The paper presents the transient population dynamics of phase singularities in the 2D Beeler-Reuter model. Two stochastic modelling approaches are examined: (i) the Master equation approach with the transition rates λ(n, t) = λ(t)n and μ(n, t) = μ(t)n, and (ii) the nonlinear Langevin equation approach with a multiplicative noise. The exact general solution of the Master equation with arbitrary time-dependent transition rates is given. Then, the exact solution of the mean field equation for the nonlinear Langevin equation is also given. It is demonstrated that the transient population dynamics is successfully identified by the generalized Logistic equation with a fractional higher-order nonlinear term. The necessity of introducing a time-dependent transition rate in the Master equation approach to incorporate the effect of nonlinearity is also demonstrated.
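
With the transition rates quoted above, the Master equation in question takes the standard linear birth-death form, written here for orientation:

```latex
\frac{\partial P(n,t)}{\partial t}
= \lambda(t)\,(n-1)\,P(n-1,t)
+ \mu(t)\,(n+1)\,P(n+1,t)
- \bigl[\lambda(t)+\mu(t)\bigr]\,n\,P(n,t).
```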

Keywords: Transient population dynamics, Phase singularity, Birth-death process, Non-stationary Master equation, nonlinear Langevin equation, generalized Logistic equation.

1046 On the Noise Distance in Robust Fuzzy C-Means

Authors: M. G. C. A. Cimino, G. Frosini, B. Lazzerini, F. Marcelloni

Abstract:

In the last decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster and is characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Though some approaches have been proposed to automatically determine the most suitable δ for a specific application, to date an efficient and fully satisfactory solution does not exist. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets found in the literature are shown and discussed.
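
For orientation, the standard noise-clustering membership update in which the noise distance δ appears (with c good clusters, fuzzifier m, and d_ik the distance of point k to prototype i); this is the usual robust-FCM formulation, not the paper's new δ-selection rule:

```latex
u_{ik} = \left[\sum_{j=1}^{c}\left(\frac{d_{ik}}{d_{jk}}\right)^{\frac{2}{m-1}}
+ \left(\frac{d_{ik}}{\delta}\right)^{\frac{2}{m-1}}\right]^{-1}.
```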

Keywords: noise prototype, robust fuzzy clustering, robust fuzzy C-means.

1045 Optimal Data Compression and Filtering: The Case of Infinite Signal Sets

Authors: Anatoli Torokhti, Phil Howlett

Abstract:

We present a theory for optimal filtering of infinite sets of random signals. There are several new distinctive features of the proposed approach. First, we provide a single optimal filter for processing any signal from a given infinite signal set. Second, the filter is presented in the special form of a sum with p terms where each term is represented as a combination of three operations. Each operation is a special stage of the filtering aimed at facilitating the associated numerical work. Third, an iterative scheme is implemented into the filter structure to provide an improvement in the filter performance at each step of the scheme. The final step of the scheme concerns signal compression and decompression. This step is based on the solution of a new rank-constrained matrix approximation problem. The solution to the matrix problem is described in this paper. A rigorous error analysis is given for the new filter.
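
As a simple point of reference for the compression step, the classical rank-r matrix approximation given by the truncated SVD (Eckart-Young); the paper's rank-constrained problem is more general, and this sketch only illustrates the low-rank idea.

```python
import numpy as np

def best_rank_r(A, r):
    """Best rank-r approximation of A in the Frobenius/spectral norm,
    obtained from the truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 30))
A5 = best_rank_r(A, 5)
print(np.linalg.matrix_rank(A5), np.linalg.norm(A - A5))
```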

Keywords: stochastic signals, optimization problems in signal processing.

1044 Generic Filtering of Infinite Sets of Stochastic Signals

Authors: Anatoli Torokhti, Phil Howlett

Abstract:

A theory for optimal filtering of infinite sets of random signals is presented. There are several new distinctive features of the proposed approach. First, a single optimal filter for processing any signal from a given infinite signal set is provided. Second, the filter is presented in the special form of a sum with p terms where each term is represented as a combination of three operations. Each operation is a special stage of the filtering aimed at facilitating the associated numerical work. Third, an iterative scheme is implemented into the filter structure to provide an improvement in the filter performance at each step of the scheme. The final step of the scheme concerns signal compression and decompression. This step is based on the solution of a new rank-constrained matrix approximation problem. The solution to the matrix problem is described in this paper. A rigorous error analysis is given for the new filter.

Keywords: Optimal filtering, data compression, stochastic signals.

1043 A Fault Tolerant Token-based Algorithm for Group Mutual Exclusion in Distributed Systems

Authors: Abhishek Swaroop, Awadhesh Kumar Singh

Abstract:

The group mutual exclusion (GME) problem is a variant of the mutual exclusion problem. In the present paper, a token-based group mutual exclusion algorithm capable of handling transient faults is proposed. The algorithm uses the concept of dynamic request sets. A time-out mechanism is used to detect token loss, and a distributed scheme is used to regenerate the token. The worst-case message complexity of the algorithm is n+1. The maximum concurrency and forum switch complexity of the algorithm are n and min(n, m) respectively, where n is the number of processes and m is the number of groups. The algorithm also satisfies another desirable property called smooth admission. The scheme can also be adapted to handle the extended group mutual exclusion problem.

Keywords: Dynamic request sets, Fault tolerance, Smooth admission, Transient faults.

1042 Aircraft Supplier Selection using Multiple Criteria Group Decision Making Process with Proximity Measure Method for Determinate Fuzzy Set Ranking Analysis

Authors: C. Ardil

Abstract:

The aircraft supplier selection process, which is considered a fundamental supply chain problem, is a multi-criteria group decision problem that has a significant impact on the performance of the entire supply chain. Practical situations frequently involve incomplete and uncertain information, making it difficult for decision-makers to express their opinions on candidates with precise and definite values. To solve the aircraft supplier selection problem in an environment of incomplete and uncertain information, a proximity measure method using determinate fuzzy numbers is proposed. The weights of the decision makers are equally predetermined, and the entropic criteria weights are calculated using each decision maker's decision matrix. Additionally, for determinate fuzzy numbers, it is proposed to use the weighted normalized Minkowski distance function and the Hausdorff distance function to determine the ranking order of the alternatives. A numerical example for aircraft supplier selection is provided to further demonstrate the applicability, effectiveness, validity, and rationality of the proposed method.

Keywords: Aircraft supplier selection, multiple criteria decision making, fuzzy sets, determinate fuzzy sets, intuitionistic fuzzy sets, proximity measure method, Minkowski distance function, Hausdorff distance function, PMM, MCDM

1041 Statistical Analysis of the Impact of Maritime Transport Gross Domestic Product on Nigeria’s Economy

Authors: K. P. Oyeduntan, K. Oshinubi

Abstract:

Nigeria is referred to as the ‘Giant of Africa’ due to its high population, large land mass and large economy. However, it still trails far behind many smaller economies in the continent in terms of maritime operations. Given that the maritime industry is the spark plug for national growth, because it houses the most crucial infrastructure that generates wealth for a nation, it is worrisome that a nation with six seaports lags in maritime activities. In this research, we study how the Gross Domestic Product (GDP) of maritime transport influences the Nigerian economy. To do this, we applied Simple Linear Regression (SLR), Support Vector Machine (SVM), Polynomial Regression Model (PRM), Generalized Additive Model (GAM) and Generalized Linear Mixed Model (GLMM) to model the relationship between the nation’s Total GDP (TGDP) and the Maritime Transport GDP (MGDP) using a time series of 20 years. The results showed that the MGDP is statistically significant for the Nigerian economy. Among the statistical tools applied, the PRM of order 4 describes the relationship better than the other methods. The recommendations presented in this study will guide policy makers and help improve the economy of Nigeria.

Keywords: Economy, GDP, maritime transport, port, regression.

1040 Ensembling Classifiers – An Application to Image Data Classification from Cherenkov Telescope Experiment

Authors: Praveen Boinee, Alessandro De Angelis, Gian Luca Foresti

Abstract:

Ensemble learning algorithms such as AdaBoost and Bagging have been actively researched and have shown improvements in classification results for several benchmark data sets, mainly with decision trees as their base classifiers. In this paper we experiment with applying these meta-learning techniques to classifiers such as random forests, neural networks and support vector machines. The data sets are from MAGIC, a Cherenkov telescope experiment. The task is to classify gamma signals against overwhelming hadron and muon signals, representing a rare-class classification problem. We compare the individual classifiers with their ensemble counterparts and discuss the results. WEKA, a machine learning toolkit, was used to run the experiments.

Keywords: Ensembles, WEKA, Neural networks [NN], Support Vector Machines [SVM], Random Forests [RF].

1039 Probability Distribution of Rainfall Depth at Hourly Time-Scale

Authors: S. Dan'azumi, S. Shamsudin, A. A. Rahman

Abstract:

Rainfall data at fine resolution and knowledge of their characteristics play a major role in the efficient design and operation of agricultural, telecommunication, runoff and erosion control, as well as water quality control systems. This paper studies the statistical distribution of hourly rainfall depth for 12 representative stations spread across Peninsular Malaysia. Hourly rainfall data covering periods of 10 to 22 years were collected and their statistical characteristics were estimated. Three probability distributions, namely the Generalized Pareto, Exponential and Gamma distributions, were proposed to model the hourly rainfall depth, and three goodness-of-fit tests, namely the Kolmogorov-Smirnov, Anderson-Darling and Chi-Squared tests, were used to evaluate their fitness. Results indicate that the east coast of the Peninsula receives a higher depth of rainfall compared to the west coast. However, the rainfall frequency is found to be irregular. Results from the goodness-of-fit tests also show that all three models fit the rainfall data at the 1% level of significance. However, the Generalized Pareto fits better than the Exponential and Gamma distributions and is therefore recommended as the best fit.
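
A short sketch of fitting the three candidate distributions and applying the Kolmogorov-Smirnov test with scipy; the rainfall series below is synthetic, standing in for real station data, and the location parameter is fixed at zero for simplicity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder hourly rainfall depths (mm); real station data would be used instead.
rain = stats.gamma.rvs(a=0.7, scale=3.0, size=2000, random_state=rng)

candidates = {
    "Generalized Pareto": stats.genpareto,
    "Exponential": stats.expon,
    "Gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(rain, floc=0)                       # fit with location fixed at 0
    ks_stat, p_value = stats.kstest(rain, dist.name, args=params)
    print(f"{name:>18}: KS={ks_stat:.3f}, p={p_value:.3f}")
```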

Keywords: Goodness-of-fit test, Hourly rainfall, Malaysia, Probability distribution.

1038 An Inductive Coupling Based CMOS Wireless Powering Link for Implantable Biomedical Applications

Authors: Lei Yao, Jia Hao Cheong, Rui-Feng Xue, Minkyu Je

Abstract:

A closed-loop controlled wireless power transmission circuit block for implantable biomedical applications is described in this paper. The circuit consists of a front-end rectifier, a power management sub-block including a bandgap reference and low drop-out regulators (LDOs), as well as transmission power detection/feedback circuits. Simulation results show that the front-end rectifier achieves 80% power efficiency with a 750-mV single-ended peak-to-peak input voltage and a 1.28-V output voltage under a load current of 4 mA. The power management block can supply a 1.8-mA average load current at 1 V while consuming only 12 μW of power, which is equivalent to 99.3% power efficiency. The wireless power transmission block described in this paper achieves a maximum power efficiency of 80%. The wireless power transmission circuit block is designed and implemented using a UMC 65-nm CMOS/RF process. It occupies 1 mm × 1.2 mm of silicon area.
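
As a quick consistency check on the quoted regulator figures (1.8 mA delivered at 1 V against 12 μW of overhead):

```latex
\eta = \frac{P_{\text{load}}}{P_{\text{load}} + P_{\text{overhead}}}
= \frac{1.8\ \text{mW}}{1.8\ \text{mW} + 12\ \mu\text{W}} \approx 99.3\%.
```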

Keywords: Implantable biomedical devices, wireless power transfer, LDO, rectifier, closed-loop power control

1037 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin

Abstract:

Higher ground-level ozone (GLO) concentration adversely affects human health and vegetation, as well as activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value of GLO concentration, which refers to the centre of the distribution, to make a prediction or estimation. However, analysis which focuses on the higher or extreme values of GLO concentration is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding model (Gumbel, Weibull, or Frechet) of the GEV distribution. The results show that the Weibull distribution, which is also known as a short-tailed distribution and considered to have less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, which is considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conduct this study by using the maximum likelihood estimation (MLE) method to estimate the parameters at four selected stations in Peninsular Malaysia. Next, the validation of the fitted block maxima series to the GEV distribution is performed using probability plots, quantile plots and the likelihood ratio test. The profile likelihood confidence interval is tested to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
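
For reference, the GEV distribution function and the associated return level for exceedance probability p (standard forms; ξ > 0, ξ = 0 and ξ < 0 correspond to the Frechet, Gumbel and Weibull tails respectively, the Gumbel case using z_p = μ − σ log(−log(1 − p))):

```latex
G(z) = \exp\!\left\{-\left[1+\xi\left(\frac{z-\mu}{\sigma}\right)\right]^{-1/\xi}\right\},
\qquad
z_p = \mu - \frac{\sigma}{\xi}\left[1-\bigl(-\log(1-p)\bigr)^{-\xi}\right].
```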

Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.

1036 Thermoelastic Waves in Anisotropic Plates Using Normal Mode Expansion Method with Thermal Relaxation Time

Authors: K.L. Verma

Abstract:

An analysis of generalized thermoelastic Lamb waves, which propagate in anisotropic thin plates in generalized thermoelasticity, is presented employing the normal mode expansion method. The displacement and temperature fields are expressed by a summation of the symmetric and antisymmetric thermoelastic modes in an orthotropic plate whose surfaces are free of thermal stresses and thermal gradients; the theory is therefore particularly appropriate for waveform analyses of Lamb waves in thin anisotropic plates. The transient waveforms excited by the thermoelastic expansion are analyzed for an orthotropic thin plate. The obtained results show that the theory provides a quantitative analysis to characterize the anisotropic thermoelastic stiffness properties of plates by wave detection. Finally, numerical calculations are presented for a NaF crystal, and the dispersion curves for the lowest modes of the symmetric and antisymmetric vibrations are represented graphically at different values of the thermal relaxation time. However, the methods can be used for other materials as well.

Keywords: Anisotropic, dispersion, frequency, normal, thermoelasticity, wave modes.

1035 A Rough Sets Approach for Relevant Internet/Web Online Searching

Authors: Erika Martinez Ramirez, Rene V. Mayorga

Abstract:

The internet is constantly expanding. Identifying web links of interest from web browsers requires users to visit each of the listed links individually until a satisfactory link is found; users therefore need to evaluate a considerable number of links before finding their link of interest, which can be tedious and even unproductive. By incorporating web assistance, web users could benefit from reduced time spent searching for relevant websites. In this paper, a rough set approach is presented which facilitates the classification of the unlimited available e-vocabulary to assist web users in reducing search times when looking for relevant web sites. This approach includes two methods for identifying relevance data on web links, based on the priority and the percentage of relevance. As a result of these methods, a list of web sites is generated in priority sequence with an emphasis on the search criteria.

Keywords: Web search, Web Mining, Rough Sets, Web Intelligence, Intelligent Portals, Relevance.
