Search results for: objective function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10903

10663 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on expected returns for the asset classes in the investment universe and use risk information only. They include the Minimum Variance strategy (MV strategy), the traditional volatility-based Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many authors, the Equally Weighted strategy (EW strategy). All of these approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they use the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function, a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis, is used to serve either a risk-minimization goal or a homogeneous risk-distribution goal. The new basic idea thus consists in extending the typical goals of risk-based approaches to a combined risk measure. The rationale for such a measure is that volatility and kurtosis are both expressions of uncertainty, read as dispersion of returns around the mean; both preserve adherence to a symmetric framework and consider the entire returns distribution, but they differ in that the former captures the "normal", ordinary dispersion of returns, while the latter captures extreme dispersion. A combined risk metric built from two individual metrics focused on the same phenomenon but differently sensitive to its intensity therefore allows the asset manager, by varying the relevance coefficient attached to each metric in the objective function, to express a wide set of plausible investment goals for the portfolio construction process while serving investors differently concerned with tail risk and traditional risk. Since this is the first study that implements risk-based approaches using a combined risk measure, it is of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing ordering in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof of the "equalization effect" concerning marginal risks for the MV strategy and risk contributions for the SRP strategy. Whether similar conclusions hold for the MK strategy and the KRP strategy still lacks a theoretical demonstration. This paper fills this gap.
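As an illustration of the combined objective, the sketch below (not the authors' code; the data, coefficient, and estimator are simplified assumptions) minimizes risk(w) = a·σ(w) + (1−a)·κ(w) over long-only weights, where κ is the fourth root of the portfolio fourth central moment, so both terms are homogeneous of degree one:

```python
# A minimal sketch (not the authors' code) of a combined volatility/kurtosis
# objective: risk(w) = a * sigma(w) + (1 - a) * kappa(w), where kappa is the
# fourth root of the portfolio fourth central moment. Sample data and the
# relevance coefficient a are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.standard_t(df=5, size=(1000, 4)) * 0.01   # fat-tailed toy returns (T x n)

def combined_risk(w, a=0.5):
    r = R @ w                                      # portfolio return series
    sigma = r.std()                                # volatility
    kappa = np.mean((r - r.mean()) ** 4) ** 0.25   # 4th root of 4th moment
    return a * sigma + (1 - a) * kappa

n = R.shape[1]
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)   # fully invested
res = minimize(combined_risk, np.full(n, 1 / n), bounds=[(0, 1)] * n,
               constraints=cons, method="SLSQP")
print(res.x.round(4), combined_risk(res.x))
```

Varying a between 0 and 1 recovers the volatility-only and kurtosis-only objectives as limiting cases.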

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 34
10662 Optimal Trailing Edge Flap Positions of Helicopter Rotor for Various Thrust Coefficient to Solidity (Ct/σ) Ratios

Authors: K. K. Saija, K. Prabhakaran Nair

Abstract:

This study aims to determine the change in optimal locations of dual trailing-edge flaps for various thrust coefficient to solidity (Ct/σ) ratios of a helicopter, to achieve minimum hub vibration levels with a low penalty in terms of required trailing-edge flap control power. Polynomial response functions are used to approximate the hub vibration and flap power objective functions. Single-objective and multi-objective optimizations are carried out with the objective of minimizing hub vibration and flap power. The optimization results show that, for minimizing hub vibration, the inboard flap location at a low Ct/σ ratio moves farther from the baseline value and at a high Ct/σ ratio moves towards the root of the blade.
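A hedged sketch of the response-surface step follows: quadratic polynomials are fitted to sampled objective values over the two flap locations, and a weighted sum of the two surrogates is minimized. The data and weights are synthetic stand-ins for the paper's aeroelastic model:

```python
# Fit quadratic response surfaces to sampled objectives (hub vibration,
# flap power) over two flap locations, then minimize a weighted sum of the
# surrogates. Responses are synthetic, for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(0.4, 0.9, size=(30, 2))           # sampled flap locations (r/R)

def features(x):   # quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], -1)

vib = (X[:, 0] - 0.55) ** 2 + 0.5 * (X[:, 1] - 0.8) ** 2    # toy responses
pwr = 0.3 * (X[:, 0] - 0.7) ** 2 + (X[:, 1] - 0.6) ** 2

c_vib, *_ = np.linalg.lstsq(features(X), vib, rcond=None)   # fit surrogates
c_pwr, *_ = np.linalg.lstsq(features(X), pwr, rcond=None)

def weighted(x, w=0.7):   # weighted-sum scalarization of the two objectives
    return w * features(x) @ c_vib + (1 - w) * features(x) @ c_pwr

res = minimize(weighted, x0=[0.6, 0.7], bounds=[(0.4, 0.9)] * 2)
print(res.x)   # surrogate-optimal inboard/outboard flap locations
```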

Keywords: helicopter rotor, trailing-edge flap, thrust coefficient to solidity (Ct /σ) ratio, optimization

Procedia PDF Downloads 443
10661 Identification of Soft Faults in Branched Wire Networks by Distributed Reflectometry and Multi-Objective Genetic Algorithm

Authors: Soumaya Sallem, Marc Olivas

Abstract:

This contribution presents a method for detecting, locating, and characterizing soft faults in a complex wired network. The proposed method is based on multi-carrier reflectometry (MCTDR, Multi-Carrier Time Domain Reflectometry) combined with a multi-objective genetic algorithm. In order to ensure complete network coverage and eliminate diagnosis ambiguities, the MCTDR test signal is injected at several points on the network, and the data is merged between the different reflectometers (sensors) distributed on the network. An adapted multi-objective genetic algorithm is used to merge the data in order to obtain more accurate fault location and characterization. The performance of the proposed method is evaluated on numerical and experimental results.

Keywords: wired network, reflectometry, network distributed diagnosis, multi-objective genetic algorithm

Procedia PDF Downloads 153
10660 Nonlinear Triad Interactions in Magnetohydrodynamic Plasma Turbulence

Authors: Yasser Rammah, Wolf-Christian Mueller

Abstract:

Nonlinear triad interactions in incompressible three-dimensional magnetohydrodynamic (3D-MHD) turbulence are studied by analyzing data from high-resolution direct numerical simulations of decaying isotropic (512³ grid points) and forced anisotropic (1024² × 256 grid points) turbulence. An accurate numerical approach toward analyzing the nonlinear turbulent energy transfer function and triad interactions is presented. It involves the direct numerical examination of every wavenumber triad that is associated with the nonlinear terms in the differential equations of MHD in the inertial range of turbulence. The technique allows us to compute the spectral energy transfer and energy fluxes, as well as the spectral locality property of the energy transfer function. To this end, the geometrical shape of each underlying wavenumber triad that contributes to the statistical transfer density function is examined to infer the locality of the energy transfer. Results show that the total energy transfer is local via nonlocal triad interactions in decaying macroscopically isotropic MHD turbulence. In anisotropic MHD turbulence subject to a strong mean magnetic field, the nonlinear transfer is generally weaker and exhibits a moderate increase of nonlocality in both the perpendicular and parallel directions compared to the isotropic case. These results support recent mathematical findings, which also claim the locality of nonlinear energy transfer in MHD turbulence.

Keywords: magnetohydrodynamic (MHD) turbulence, transfer density function, locality function, direct numerical simulation (DNS)

Procedia PDF Downloads 351
10659 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error that can result in the loss of video frames on the receiving side of a transmission system. In our subjective tests, we have evaluated videos that contain a single freezing event as well as videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to give a comparison of the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of the videos and suggested the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
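For illustration, a minimal no-reference freeze detector (a simple stand-in, not one of the NR algorithms evaluated in the paper) can flag freezing events by thresholding inter-frame differences:

```python
# A minimal no-reference (NR) sketch: flag frame-freezing events by
# thresholding the mean absolute difference between consecutive frames.
# Threshold and synthetic clip are illustrative.
import numpy as np

rng = np.random.default_rng(2)
frames = rng.integers(0, 256, size=(60, 48, 64)).astype(float)  # toy clip
frames[20:25] = frames[20]    # inject a 5-frame freeze
frames[40:42] = frames[40]    # and a shorter one

diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))  # inter-frame MAD
frozen = diffs < 1e-6         # near-zero motion => candidate freeze

# group consecutive frozen flags into events (start index, length)
events, start = [], None
for i, f in enumerate(frozen):
    if f and start is None:
        start = i
    elif not f and start is not None:
        events.append((start, i - start))
        start = None
if start is not None:          # a freeze running to the end of the clip
    events.append((start, len(frozen) - start))
print(events)                  # e.g. [(20, 4), (40, 1)] -> two freezing events
```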

Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)

Procedia PDF Downloads 571
10658 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced gradient methods (VR) compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
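The sketch below illustrates the core combination on a toy least-squares problem, assuming an L-SVRG estimator with AdaGrad coordinate-wise steps; the exact AdaLVR parameterization and constants in the paper may differ:

```python
# A hedged sketch of combining a loopless SVRG estimator with AdaGrad-style
# coordinate-wise step sizes (in the spirit of the paper's AdaLVR), applied
# to f(w) = (1/n) * sum_i 0.5 * (a_i.w - b_i)^2.
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def grad_i(w, i):          # gradient of f_i(w) = 0.5 * (a_i.w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
w_ref = w.copy()
mu = A.T @ (A @ w_ref - b) / n     # full gradient at the reference point
G = np.zeros(d)                    # AdaGrad accumulator
eta, p, eps = 0.5, 1 / n, 1e-8

for t in range(5000):
    i = rng.integers(n)
    g = grad_i(w, i) - grad_i(w_ref, i) + mu    # loopless SVRG estimator
    G += g * g                                   # AdaGrad: accumulate g^2
    w -= eta * g / np.sqrt(G + eps)              # coordinate-wise step
    if rng.random() < p:                         # loopless: reference point
        w_ref = w.copy()                         # updated at random, no
        mu = A.T @ (A @ w_ref - b) / n           # inner loop needed
print(np.linalg.norm(A.T @ (A @ w - b) / n))     # gradient norm at w
```

The coin-flip reference update is what makes the estimator "loopless": it replaces SVRG's explicit inner loop while keeping the same expected update frequency.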

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 15
10657 The Predictive Implication of Executive Function and Language in Theory of Mind Development in Preschool Age Children

Authors: Michael Luc Andre, Célia Maintenant

Abstract:

Theory of mind is a milestone in child development which allows children to understand that others can have mental states different from theirs. Understanding the developmental stages of theory of mind in children has led researchers to two connected research problems: on the one hand, the link between executive function and theory of mind, and on the other hand, the relationship between theory of mind and syntax processing. These two lines of research have produced a large literature full of important results, despite a certain level of disagreement between researchers. For a long time, these two research perspectives continued to grow separately, despite research conclusions suggesting that the three variables should implicate the same developmental period. Indeed, our goal was to study the relation between theory of mind, executive function, and language via a single research question. It supposed that, between executive function and language, one of the two variables could play a critical role in the relationship between theory of mind and the other variable. Thus, 112 children aged between three and six years were recruited to complete a receptive and an expressive vocabulary task, a syntax understanding task, a theory of mind task, and three executive function tasks (inhibition, cognitive flexibility, and working memory). The results showed significant correlations between performance on the theory of mind task and performance on the executive function tasks, except for the cognitive flexibility task. We also found significant correlations between success on the theory of mind task and performance on all language tasks. Multiple regression analysis retained only syntax and general language abilities as possible predictors of theory of mind performance in our sample of preschool-age children. The results are discussed from the perspective of a major role of language abilities in theory of mind development. We also discuss possible reasons that could explain the non-significance of the executive domains in predicting theory of mind performance, and the meaning of our results for the literature.

Keywords: child development, executive function, general language, syntax, theory of mind

Procedia PDF Downloads 22
10656 Fracture and Dynamic Behavior of Leaf Spring Suspension

Authors: S. Lecheb, A. Chellil, H. Mechakra, S. Attou, H. Kebir

Abstract:

Although leaf springs are one of the oldest suspension components, they are still frequently used, especially in commercial vehicles. Being able to capture the leaf spring characteristics is of significant importance for vehicle handling dynamics studies. The main function of a leaf spring is not only to support vertical load but also to isolate road-induced vibrations. It is subjected to millions of load cycles leading to fatigue failure, and so needs to have an excellent fatigue life. The objective of this work is to use Abaqus software to locate the most stressed areas of the leaf spring, predict the regions in which fatigue and cracking occur, and calculate the stresses and natural frequencies of this model.

Keywords: leaf spring, crack, stress, natural frequencies

Procedia PDF Downloads 415
10655 Jensen's Inequality and M-Convex Functions

Authors: Yamin Sayyari

Abstract:

In this paper, we generalize Jensen's inequality for m-convex functions, and we also present a refinement of Jensen's inequality that is better than the generalization of this inequality for m-convex functions. Finally, we find new lower and upper bounds for Jensen's discrete inequality.
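For reference, the standard background definitions can be restated in LaTeX (the paper's own generalization and bounds are not reproduced here); m-convexity is the usual Toader definition:

```latex
% m-convexity in the sense of Toader, for fixed m in [0,1]:
\[
f\bigl(t x + m(1-t) y\bigr) \;\le\; t\, f(x) + m(1-t)\, f(y),
\qquad x, y \in [0, b],\; t \in [0, 1].
\]
% Classical discrete Jensen inequality for a convex f (the m = 1 case),
% with weights w_i >= 0 summing to 1:
\[
f\Bigl(\sum_{i=1}^{n} w_i x_i\Bigr) \;\le\; \sum_{i=1}^{n} w_i f(x_i).
\]
```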

Keywords: Jensen's inequality, m-convex function, convex function, inequality

Procedia PDF Downloads 116
10654 An Application of Sinc Function to Approximate Quadrature Integrals in Generalized Linear Mixed Models

Authors: Altaf H. Khan, Frank Stenger, Mohammed A. Hussein, Reaz A. Chaudhuri, Sameera Asif

Abstract:

This paper discusses a novel approach to approximating the quadrature integrals that arise in the estimation of likelihood parameters for generalized linear mixed models (GLMM). Bayesian methodology likewise requires the computation of multidimensional integrals with respect to posterior distributions, and such computations are not only tedious and cumbersome but in some situations impossible because of singularities, irregular domains, etc. An attempt has been made in this work to apply Sinc-function-based quadrature rules to approximate such intractable integrals, as Sinc-based methods have several advantages: the order of convergence is exponential, they work very well in the neighborhood of singularities, they are in general quite stable, and they provide highly accurate, double-precision estimates. To our knowledge, the Sinc-function-based approach is being utilized for the first time in the statistical domain, and its viability and future scope are discussed for the estimation of parameters of GLMM models as well as some other statistical areas.
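A minimal sketch of the basic Sinc (trapezoidal) rule on the real line follows; its error decays exponentially for analytic, rapidly decaying integrands (step size and truncation below are illustrative, not the tuned rules of the paper):

```python
# Basic Sinc quadrature on R: integral of f ~ h * sum_{k=-N..N} f(k h).
import numpy as np

def sinc_quadrature(f, h=0.25, N=40):
    k = np.arange(-N, N + 1)
    return h * np.sum(f(k * h))

approx = sinc_quadrature(lambda x: np.exp(-x**2))
print(approx, np.sqrt(np.pi), abs(approx - np.sqrt(np.pi)))  # error near 1e-16
```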

Keywords: generalized linear mixed model, likelihood parameters, quadrature, Sinc function

Procedia PDF Downloads 366
10653 Aerodynamic Optimum Nose Shape Change of High-Speed Train by Design Variable Variation

Authors: Minho Kwak, Suhwan Yun, Choonsoo Park

Abstract:

Nose shape optimizations of a high-speed train are performed to improve its aerodynamic characteristics. Based on the commercial train KTX-Sancheon, multi-objective optimizations are conducted to improve side-wind stability and the micro-pressure wave, following an optimization to reduce aerodynamic drag. 3D nose shapes are modelled by the Vehicle Modeling Function. Aerodynamic drag and side-wind stability are calculated by a three-dimensional compressible Navier-Stokes solver, and the micro-pressure wave by an axisymmetric compressible Navier-Stokes solver. The maximin Latin hypercube sampling method is used to extract sampling points for constructing the approximation model. A kriging model is constructed as the approximation model, and the NSGA-II algorithm is used as the multi-objective optimization algorithm. Nose length, nose tip height, and lower surface curvature are the design variables. Because nose length is a dominant variable for the aerodynamic characteristics of a train nose, two optimization processes are carried out, respectively with and without nose length as a design variable. A Pareto set is obtained in each case, and an optimized nose shape is selected from each, considering the Honam high-speed rail line infrastructure in South Korea. Through the optimization process with nose length, compared to the KTX-Sancheon, aerodynamic drag was reduced by 9.0%, side-wind stability was improved by 4.5%, and the micro-pressure wave was reduced by 5.4%, whereas without nose length, aerodynamic drag was reduced by 7.3%, side-wind stability improved by 3.9%, and the micro-pressure wave reduced by 3.9%. A comparison of the two optimized shapes shows that, apart from the effect of nose length, similar shapes are extracted.
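The sampling-plus-surrogate step can be sketched as follows, assuming SciPy's Latin hypercube sampler and a Gaussian process (kriging) surrogate fitted to a toy drag response; bounds and response stand in for the CFD solvers, and plain LHS is used where the paper's maximin variant additionally maximizes the minimum inter-point distance:

```python
# Latin hypercube samples over three design variables, then a kriging
# (Gaussian process) surrogate fitted to an illustrative response.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

sampler = qmc.LatinHypercube(d=3, seed=4)
unit = sampler.random(n=30)
# design variables: nose length [m], tip height [m], lower-surface curvature
X = qmc.scale(unit, l_bounds=[6.0, 0.3, 0.1], u_bounds=[12.0, 0.9, 0.5])

drag = 1.0 / X[:, 0] + (X[:, 1] - 0.5) ** 2 + 0.2 * X[:, 2]   # toy response

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[3.0, 0.3, 0.2]),
                              normalize_y=True).fit(X, drag)
print(gp.predict([[9.0, 0.6, 0.3]]))   # surrogate drag at a new design
```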

Keywords: aerodynamic characteristics, design variable, multi-objective optimization, train nose shape

Procedia PDF Downloads 317
10652 [Keynote Speech]: Bridge Damage Detection Using Frequency Response Function

Authors: Ahmed Noor Al-Qayyim

Abstract:

During the past decades, bridge structures have come to be considered very important portions of transportation networks due to fast urban sprawl. Failures of bridges under operating conditions have led to a focus on updating the default bridge inspection methodology. Structural health monitoring (SHM) using the vibration response has appeared as a promising method to evaluate the condition of structures. The rapid development of sensor technology and of condition assessment techniques based on vibration-based damage detection has made SHM an efficient and economical way to assess bridges. SHM is set up to assess the state of designated bridges and to anticipate probable failures. This paper presents the frequency response function method, which uses captured vibration test information to evaluate the condition of a structure. Furthermore, the main steps of assessing a bridge using vibration information are presented. The frequency response function method is applied to experimental data from a full-scale bridge.
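A minimal sketch of estimating an FRF from measured excitation and response with the standard H1 estimator, H1(f) = Sxy(f)/Sxx(f), on a synthetic single-resonance system (the paper works with full-scale bridge test data):

```python
# Estimate an FRF with the H1 estimator from input x(t) and response y(t).
import numpy as np
from scipy import signal

fs = 256.0
rng = np.random.default_rng(5)
x = rng.normal(size=16384)                       # broadband excitation
# response of a lightly damped resonance near 10 Hz (toy structure)
sos = signal.butter(2, [9.0, 11.0], btype="bandpass", fs=fs, output="sos")
y = signal.sosfilt(sos, x) + 0.01 * rng.normal(size=x.size)

f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)   # cross-spectral density
_, Sxx = signal.welch(x, fs=fs, nperseg=1024)    # input auto-spectrum
H1 = Sxy / Sxx
peak = f[np.argmax(np.abs(H1))]
print(f"FRF peak near {peak:.1f} Hz")            # shifts if damage alters modes
```

Damage detection then compares FRF features (resonance peaks, amplitudes) between a baseline state and the current state of the structure.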

Keywords: bridge assessment, health monitoring, damage detection, frequency response function (FRF), signal processing, structure identification

Procedia PDF Downloads 314
10651 Optimal Sortation Strategy for a Distribution Network in an E-Commerce Supply Chain

Authors: Pankhuri Dagaonkar, Charumani Singh, Poornima Krothapalli, Krishna Karthik

Abstract:

The backbone of any retail e-commerce success story is a unique design of the supply chain network, providing the business unparalleled speed and scalability. The primary goal of the supply chain strategy is to meet customer expectations by offering the fastest deliveries while keeping costs minimal. Meeting this objective in the large market that India provides is the problem statement we target here. There are many models and optimization techniques focused on network design to identify the ideal facility location and size, optimizing cost and speed. In this paper, we present a tactical approach to optimizing the cost of an existing network for a predefined speed. We have considered both forward and reverse logistics of a retail e-commerce supply chain consisting of multiple fulfillment (warehouse) and delivery centers, which are connected via sortation nodes. The mathematical model presented here determines whether a shipment from a node should be sorted directly for the last-mile delivery center or should travel as a consolidated package to another node for further sortation (resort). The objective function minimizes the total cost by varying the resort percentages between nodes and provides the optimal resource allocation and number of sorts at each node.
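A toy instance of the direct-versus-resort decision (far smaller than the authors' model; volumes, costs, and capacity are invented for illustration) can be written as a linear program:

```python
# Choose each node's direct-sort vs. consolidated (resort) volume to
# minimize total cost subject to the hub's sortation capacity.
import numpy as np
from scipy.optimize import linprog

volume = np.array([800.0, 600.0, 400.0])      # daily shipments per node
c_direct = np.array([2.0, 2.5, 3.0])          # per-shipment direct-sort cost
c_resort = np.array([1.2, 1.4, 1.5])          # consolidated cost via hub
hub_capacity = 900.0

# variables: d_i (direct volume), r_i (resorted volume) for each node i
c = np.concatenate([c_direct, c_resort])
A_eq = np.hstack([np.eye(3), np.eye(3)])      # d_i + r_i = volume_i
A_ub = np.concatenate([np.zeros(3), np.ones(3)])[None, :]   # sum r_i <= cap

res = linprog(c, A_ub=A_ub, b_ub=[hub_capacity],
              A_eq=A_eq, b_eq=volume, bounds=(0, None))
d, r = res.x[:3], res.x[3:]
print("direct:", d.round(1), "resort:", r.round(1), "cost:", res.fun)
```

The solver fills the hub with the nodes offering the largest per-shipment saving, which mirrors the resort-percentage trade-off described in the abstract.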

Keywords: distribution strategy, mathematical model, network design, supply chain management

Procedia PDF Downloads 262
10650 The Impact of Self-Viewing in Virtual Teamwork on Team Creativity: The Mediating Effect of Objective Self-Awareness and the Moderating Effect of Psychological Safety

Authors: Xueyang Li

Abstract:

This thesis investigates the impact of self-viewing on team creativity in virtual teamwork and examines the role of objective self-awareness and psychological safety in this context. The study uses a quantitative research approach and collects data from 304 participants working in virtual teams. We hypothesized that observing oneself in online meetings would lead to a heightened sense of objective self and thus lower team creativity, and that psychological safety would moderate this relationship. We tested these hypotheses in a laboratory experiment, manipulating whether participants were able to observe themselves during the completion of an online team creativity task and whether participants were subjected to a psychological safety intervention. The results indicate that self-observation has a negative effect on team creativity in virtual teamwork, that objective self-awareness mediates this relationship, and that psychological safety plays a moderating role. We discuss several aspects of the theoretical explanation of the findings. This study contributes to the existing literature by highlighting the importance of self-observation in virtual teamwork and provides practical implications for managers and team leaders to promote creativity in virtual teams.

Keywords: objective self-awareness, psychological safety, self-viewing, team creativity, virtual teamwork

Procedia PDF Downloads 55
10649 Efficient Subgoal Discovery for Hierarchical Reinforcement Learning Using Local Computations

Authors: Adrian Millea

Abstract:

In hierarchical reinforcement learning, one of the main issues encountered is the discovery of subgoal states or options (which are policies reaching subgoal states) by partitioning the environment in a meaningful way. This partitioning usually requires an expensive global clustering operation or an eigendecomposition of the Laplacian of the state graph. We propose a local solution to this issue, much more efficient than algorithms using global information, which successfully discovers subgoal states by computing a simple quantity, which we call heterogeneity, for each state as a function of its neighbors. Moreover, we construct a value function using the difference in heterogeneity from one step to the next as reward, such that we are able to explore the state space much more efficiently than, say, epsilon-greedy. The same principle can then be applied to higher levels of the hierarchy, where the states are now the subgoals discovered at the level below.

Keywords: exploration, hierarchical reinforcement learning, locality, options, value functions

Procedia PDF Downloads 129
10648 On the End-of-Life Inventory Problem

Authors: Hans Frenk, Sonya Javadi, Semih Onur Sezer

Abstract:

We consider the so-called end-of-life inventory problem for the supplier of a product in the final phase of its service life cycle. This phase starts when the production of the items stops and continues until the warranty of the last sold item expires. At the beginning of this phase, the supplier places a final order for spare parts to serve customers coming with defective items. At any time during the final phase, the supplier may also decide to switch to an alternative and more cost-effective policy. This alternative policy may take the form of replacing a defective item with a substitutable product or offering discounts or rebates on new-generation products. In this setup, the objective is to find a final order quantity and a switching time which together minimize the total expected discounted cost. We study this problem under a general cost structure in a continuous-time framework where arrivals of defective items are given by a non-homogeneous Poisson process. We consider four formulations which differ in the nature of the switching time. These formulations are studied in detail, and properties of the objective function are derived in each case. Using these properties, we provide exact algorithms for efficient numerical implementation. Numerical examples are provided illustrating the application of these algorithms; in these examples, we also compare the costs associated with the different formulations.
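One ingredient of the model can be sketched as follows: defective-item arrivals are simulated as a non-homogeneous Poisson process by thinning, and the expected discounted cost of a candidate final order quantity is estimated by Monte Carlo; the cost structure here is a drastic simplification of the paper's general formulation, with no switching decision:

```python
# NHPP arrivals via thinning, plus a Monte Carlo estimate of the expected
# discounted cost of a final order quantity Q. Rates and costs illustrative.
import numpy as np

rng = np.random.default_rng(6)
T, discount = 10.0, 0.1                  # horizon (years), discount rate
lam = lambda t: 20.0 * np.exp(-0.3 * t)  # decaying arrival intensity
lam_max = 20.0

def nhpp_arrivals():
    """Thinning: candidates at rate lam_max, kept w.p. lam(t)/lam_max."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(out)
        if rng.random() < lam(t) / lam_max:
            out.append(t)

def discounted_cost(Q, c_order=5.0, c_short=40.0, n_sims=2000):
    total = 0.0
    for _ in range(n_sims):
        arr = nhpp_arrivals()          # sorted arrival times
        short = arr[Q:]                # arrivals beyond the stock Q
        total += c_order * Q + np.sum(c_short * np.exp(-discount * short))
    return total / n_sims

Qs = np.arange(40, 90, 5)
costs = [discounted_cost(Q) for Q in Qs]
print(Qs[int(np.argmin(costs))])       # Monte Carlo best final order
```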

Keywords: end-of-life inventory control, martingales, optimization, service parts

Procedia PDF Downloads 295
10647 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning newsvendor model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the newsvendor model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and available for a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical newsvendor model in that we incorporate the human factor (specifically, worker learning) and its influence on the costs of processing units into the model. We describe this by using the well-known Wright's learning curve. Most of the assumptions of the classical newsvendor model are maintained in our work, such as the constant per-unit costs of leftover and shortage, the zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost saving from worker learning is added to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a uniform distribution; if the demand follows a beta distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific level of lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
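The flavor of the model can be sketched numerically, assuming a lost-sales newsvendor whose processing cost follows Wright's learning curve (cost of the n-th unit = c1 * n**b with b = log2 of the learning rate); all parameters are illustrative, and the grid search below replaces the paper's analytical treatment:

```python
# Lost-sales newsvendor with learning-curve processing costs, solved by
# simulation over uniform demand plus a grid search over the lot size Q.
import numpy as np

rng = np.random.default_rng(7)
c1, rate = 5.0, 0.85                       # first-unit cost, 85% learning curve
b = np.log2(rate)                          # Wright exponent (negative)
h, s = 1.0, 12.0                           # leftover and shortage cost per unit
demand = rng.uniform(50, 150, size=20000)  # uniform demand scenarios

def expected_cost(Q):
    processing = c1 * np.sum(np.arange(1, Q + 1) ** b)  # learning-curve cost
    leftover = h * np.maximum(Q - demand, 0.0)
    shortage = s * np.maximum(demand - Q, 0.0)
    return processing + np.mean(leftover + shortage)

Qs = np.arange(50, 151)
print(Qs[int(np.argmin([expected_cost(Q) for Q in Qs]))])   # best lot size
```

Because the processing term is concave in Q, the total expected cost need not be convex, which is exactly the difficulty the abstract describes.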

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 378
10646 Solving Fuzzy Multi-Objective Linear Programming Problems with Fuzzy Decision Variables

Authors: Mahnaz Hosseinzadeh, Aliyeh Kazemi

Abstract:

In this paper, a method is proposed for solving fuzzy multi-objective linear programming problems (FMOLPP) with fuzzy right-hand sides and fuzzy decision variables. To illustrate the proposed method, it is applied to the problem of selecting suppliers for an automotive parts producer in Iran, in order to find the optimal number of orders allocated to each supplier considering the conflicting objectives. Finally, the obtained results are discussed.

Keywords: fuzzy multi-objective linear programming problems, triangular fuzzy numbers, fuzzy ranking, supplier selection problem

Procedia PDF Downloads 343
10645 Design Optimization of Doubly Fed Induction Generator Performance by Differential Evolution

Authors: Mamidi Ramakrishna Rao

Abstract:

Doubly-fed induction generators (DFIG), due to their advantages such as speed variation and four-quadrant operation, find application in wind turbines. A DFIG, besides supplying power to the grid, has to support reactive power (kvar) under grid voltage variations, should contribute minimum fault current during faults, and should have high efficiency, minimum weight, and adequate rotor protection during crowbar operation from +20% to -20% of rated speed. To achieve the optimum performance, a good electromagnetic design of the DFIG is required. In this paper, a simple and heuristic global optimization method, differential evolution, has been used. The variables considered are lamination details such as slot dimensions, stack diameters, and air gap length, and the generator stator and rotor stack lengths. Two operating conditions have been considered: voltage and speed variations. The constraints included the reactive power supplied to the grid and limits on fault current and torque. The optimization has been executed separately for three objective functions: maximum efficiency, weight reduction, and grid fault stator currents. Subsequent calculations led to the conclusion that designs determined through differential evolution help in determining an optimum electrical design for each objective function.
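A minimal sketch of the optimization step with SciPy's differential evolution follows, using a toy design objective with a penalty for a performance constraint; the electromagnetic DFIG model itself is not reproduced, and all functions and ranges are invented for illustration:

```python
# Constrained design optimization with differential evolution; the
# efficiency constraint is handled as a quadratic penalty.
import numpy as np
from scipy.optimize import differential_evolution

def weight(x):            # toy mass model over (slot depth, stack length, gap)
    slot, stack, gap = x
    return 50 * stack + 8 * slot - 2 * gap

def efficiency(x):        # toy efficiency surrogate
    slot, stack, gap = x
    return 0.9 + 0.05 * np.tanh(stack - 1.0) - 0.1 * gap

def objective(x):         # minimize weight, penalize efficiency < 93%
    return weight(x) + 1e4 * max(0.0, 0.93 - efficiency(x)) ** 2

bounds = [(0.5, 2.0), (0.5, 2.0), (0.001, 0.01)]   # illustrative ranges (m)
res = differential_evolution(objective, bounds, seed=8, tol=1e-8)
print(res.x, weight(res.x), efficiency(res.x))
```

Running the same search with each objective in turn (efficiency, weight, fault current) mirrors the separate per-objective runs described in the abstract.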

Keywords: design optimization, performance, DFIG, differential evolution

Procedia PDF Downloads 121
10644 Bayesian Optimization for Reaction Parameter Tuning: An Exploratory Study of Parameter Optimization in Oxidative Desulfurization of Thiophene

Authors: Aman Sharma, Sonali Sengupta

Abstract:

The study explores the utility of Bayesian optimization in tuning the physical and chemical parameters of reactions in an offline experimental setup. A comparative analysis of the influence of the acquisition function on the optimization performance is also studied. For proxy first- and second-order reactions, the results are indifferent to the acquisition function used, whereas, when studying the parameters for the oxidative desulfurization of thiophene in an offline setup, the upper confidence bound (UCB) provides faster convergence along with a marginal trade-off in the maximum conversion achieved. The work also demarcates the critical number of independent parameters and input observations required for both sequential and offline reaction setups to yield tangible results.
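A compact sketch of Bayesian optimization with a Gaussian-process surrogate and the UCB acquisition a(x) = μ(x) + κ·σ(x), maximizing a toy one-dimensional conversion response; the actual reaction system, parameter ranges, and κ schedule in the paper are not reproduced:

```python
# Bayesian optimization loop: fit a GP to observed (x, y), pick the next
# experiment by maximizing the UCB acquisition, repeat.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(9)
conversion = lambda T: np.exp(-((T - 70.0) / 15.0) ** 2)   # unknown response
grid = np.linspace(30, 110, 400)[:, None]                  # temperature (C)

X = rng.uniform(30, 110, size=(3, 1))                      # initial experiments
y = conversion(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(Matern(length_scale=10.0, nu=2.5),
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sd                                    # kappa = 2
    x_next = grid[int(np.argmax(ucb))]                     # next experiment
    X = np.vstack([X, [x_next]])
    y = np.append(y, conversion(x_next))
print(X[int(np.argmax(y))], y.max())    # best temperature found, near 70 C
```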

Keywords: acquisition function, Bayesian optimization, desulfurization, kinetics, thiophene

Procedia PDF Downloads 145
10643 Analytical Design of Fractional-Order PI Controller for Decoupling Control System

Authors: Truong Nguyen Luan Vu, Le Hieu Giang, Le Linh

Abstract:

The FOPI controller is proposed based on the main properties of the decoupling control scheme as well as on fractional calculus. By using the simplified decoupling technique, the transfer function of the decoupled apparent process is first separated into a set of n equivalent independent processes, in terms of the ratios of the diagonal elements of the original open-loop transfer function to those of the dynamic relative gain array, and a fractional-order PI controller is then developed for each control loop based on Bode's ideal transfer function, which gives the desired fractional closed-loop response in the frequency domain. Simulation studies were carried out to evaluate the proposed design approach in a fair comparison with other existing methods, in accordance with structured singular value (SSV) theory, which is used to measure the robust stability of control systems under multiplicative output uncertainty. The simulation results indicate that the proposed method consistently performs well, with fast and well-balanced closed-loop time responses.
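For reference, Bode's ideal transfer function invoked in the abstract has the standard form below (the paper's tuned parameters are not restated):

```latex
% Bode's ideal loop transfer function, with gain crossover frequency
% \omega_c and fractional slope \gamma:
\[
L(s) = \left(\frac{\omega_c}{s}\right)^{\gamma}, \qquad 1 < \gamma < 2,
\]
% which yields a constant phase margin \varphi_m = \pi - \gamma\pi/2
% independent of the gain, and the fractional-order PI (FOPI) structure
\[
C(s) = K_p + \frac{K_i}{s^{\lambda}}, \qquad 0 < \lambda < 2.
\]
```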

Keywords: Bode's ideal transfer function, fractional calculus, fractional-order proportional integral (FOPI) controller, decoupling control system

Procedia PDF Downloads 296
10642 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir

Abstract:

NSGA-II is one of the most well-known and most widely used evolutionary algorithms. In addition to its newer versions, such as NSGA-III, there are several modified variants of this algorithm in the literature. In this paper, a hybrid NSGA-II algorithm is suggested for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, the neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for solutions protected by elitism. The neighbors are generated using a constraint-based neural network that uses various constructs. The non-dominated sorting and crowding distance operators are the same as in the classic NSGA-II. A comparison based on several multi-objective benchmarks from the literature shows the efficiency of the algorithm.
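The two operators the abstract keeps unchanged from classic NSGA-II can be sketched as follows (standard definitions for a minimization problem, not the authors' code):

```python
# Fast non-dominated sorting and crowding distance, as in classic NSGA-II.
import numpy as np

def non_dominated_sort(F):
    """F: (n, m) objective matrix. Returns a list of fronts (index lists)."""
    n = len(F)
    S = [[] for _ in range(n)]         # solutions dominated by i
    counts = np.zeros(n, dtype=int)    # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                S[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

def crowding_distance(F):
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        d[order[0]] = d[order[-1]] = np.inf    # boundary points always kept
        span = F[order[-1], k] - F[order[0], k] or 1.0
        d[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return d

F = np.array([[1, 5], [2, 3], [3, 1], [2.5, 2.5], [4, 4]])
print(non_dominated_sort(F), crowding_distance(F))
```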

Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures

Procedia PDF Downloads 188
10641 Wavelets Contribution on Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pre-treatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table, keeping only the information carriers. It may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
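A hedged sketch of the denoising step with PyWavelets, applied to a toy contingency table (wavelet family, level, and threshold are illustrative choices, not the paper's):

```python
# Wavelet denoising of a (form x document) contingency table: decompose,
# soft-threshold the detail coefficients, reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(10)
table = rng.poisson(5.0, size=(64, 32)).astype(float)   # toy contingency table
noisy = table + rng.normal(0, 1.0, size=table.shape)

coeffs = pywt.wavedec2(noisy, "db2", level=2)
thr = 2.0
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")[:64, :32]
print(np.abs(denoised - table).mean(), np.abs(noisy - table).mean())
```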

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 249
10640 Combined Odd Pair Autoregressive Coefficients for Epileptic EEG Signals Classification by Radial Basis Function Neural Network

Authors: Boukari Nassim

Abstract:

This paper describes the use of odd-pair autoregressive coefficients (Yule-Walker and Burg) for the feature extraction of electroencephalogram (EEG) signals. For the classification, a radial basis function neural network (RBFNN) is employed. The RBFNN is described by its architecture and its characteristics: the RBF is defined by the spread, which is modified to improve the classification results. Five types of EEG signals are defined for this work: Set A and Set B for normal signals, Set C and Set D for interictal signals, and Set E for ictal signals (these sets can be found in the Bonn University database). At the output, two-class problems are given (AC, AD, AE, BC, BD, BE, CE, DE); the best accuracy is calculated at 99% for the combined odd-pair autoregressive coefficients. Our method is very effective for the diagnosis of epileptic EEG signals.
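One feature-extraction ingredient can be sketched as follows: Yule-Walker AR coefficients obtained from the autocorrelation sequence via a Toeplitz system (the Burg variant, the odd-pair combination, and the RBFNN classifier are not reproduced; the order and toy signal are illustrative):

```python
# Yule-Walker AR coefficient estimation: solve R a = r for the AR model
# x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t.
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, order):
    x = x - x.mean()
    r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(order + 1)])
    r /= len(x)                                   # biased autocovariances
    return solve_toeplitz(r[:order], r[1 : order + 1])   # AR coefficients

rng = np.random.default_rng(11)
# toy AR(2) "EEG" segment: x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + noise
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.normal()
print(yule_walker(x, order=2))    # approximately [1.5, -0.7]
```

The estimated coefficients then serve as the feature vector fed to the classifier.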

Keywords: epilepsy, EEG signals classification, combined odd pair autoregressive coefficients, radial basis function neural network

Procedia PDF Downloads 316
10639 The Effects of Exercise Training on LDL Mediated Blood Flow in Coronary Artery Disease: A Systematic Review

Authors: Aziza Barnawi

Abstract:

Background: Regular exercise reduces risk factors associated with cardiovascular diseases. Over the past decade, exercise interventions have been introduced to reduce the risk of and prevent coronary artery disease (CAD). Elevated low-density lipoproteins (LDL) contribute to the formation of atherosclerosis; its manifestations on the endothelium narrow the coronary artery and affect endothelial function. Therefore, the flow-mediated dilation (FMD) technique is used to assess this function. The results of previous studies have been inconsistent and difficult to interpret across different types of exercise programs. The relationship between exercise therapy and lipid levels has been extensively studied, and exercise is known to improve the lipid profile and endothelial function. However, the effectiveness of exercise in altering LDL levels and improving blood flow is controversial. Objective: This review aims to explore the evidence and quantify the impact of exercise training on LDL levels and on vascular function as measured by FMD. Methods: The electronic databases PubMed, Google Scholar, Web of Science, the Cochrane Library, and EBSCO were searched using the keywords: "low and/or moderate aerobic training", "blood flow", "atherosclerosis", "LDL mediated blood flow", "Cardiac Rehabilitation", "low-density lipoproteins", "flow-mediated dilation", "endothelial function", "brachial artery flow-mediated dilation", "oxidized low-density lipoproteins", and "coronary artery disease". The included studies were conducted for 6 weeks or more and influenced LDL levels and/or FMD. Studies with different intensity training and endurance training in healthy or CAD individuals were included. Results: Twenty-one randomized controlled trials (RCTs) (14 FMD and 7 LDL studies) with 776 participants (605 exercise participants and 171 control participants) met the eligibility criteria and were included in the systematic review. Endurance training resulted in a greater reduction in LDL levels and their subfractions and a better FMD response. Overall, the training groups showed improved physical fitness compared with the control groups. Participants whose exercise duration was ≥150 minutes/week had significant improvement in FMD and LDL levels compared with those exercising <150 minutes/week. Conclusion: Although the relationship between physical training, LDL levels, and blood flow in CAD is complex and multifaceted, there are promising results for controlling primary and secondary prevention of CAD by exercise. Exercise training, including resistance, aerobic, and interval training, is positively correlated with improved FMD. However, the small body of evidence from the LDL studies (resistance and interval training) did not prove to be significantly associated with improved blood flow. Increasing evidence suggests that exercise training is a promising adjunctive therapy to improve cardiovascular health, potentially improving blood flow and contributing to the overall management of CAD.

Keywords: exercise training, low density lipoprotein, flow mediated dilation, coronary artery disease

Procedia PDF Downloads 43
10638 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as nonlinear optimizations, where the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution or, in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain this initial guess, we use a least-squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and non-contact experimentally measured data.
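For reference, the standard functionals referred to in the abstract can be restated in LaTeX (hedged: the paper's exact formulation is not reproduced, and the barrier form follows Polyak's modified barrier):

```latex
% Tikhonov regularization of the data misfit, with parameter \alpha and
% stabilizing functional \|Lx\|^2:
\[
\min_{x} \; \|F(x) - y\|^{2} + \alpha \,\|L x\|^{2}.
\]
% Polyak-style modified barrier function for bound constraints
% c_i(x) \ge 0, with multipliers \lambda_i and barrier parameter k:
\[
M(x, \lambda, k) \;=\; f(x) \;-\; \frac{1}{k}\sum_{i} \lambda_i
\ln\bigl(1 + k\, c_i(x)\bigr).
\]
```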

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 242
10637 Rational Bureaucracy and E-Government: A Philosophical Study of Universality of E-Government

Authors: Akbar Jamali

Abstract:

Hegel is the first great political philosopher to contemplate bureaucracy specifically. For Hegel, bureaucracy is the function of the state. Since the state is essentially a rational organization, its function, namely bureaucracy, must be rational. Since what is rational is universal, Hegel had to explain how bureaucracy could be understood as universal. Hegel discusses bureaucracy in his treatment of 'executive power'. He analyses modern bureaucracy as a form of political organization, its constituent members, and its relation to the social environment. Therefore, the essence of bureaucracy in Hegel's philosophy is the implementation of law and rules. Hegel argues that unlike the other social classes, which are particular because they look to their own private interests, the bureaucracy is a 'universal' class because its orientation is the interest of the state. The state for Hegel is essentially rational and universal; it is the actualization of 'objective Spirit'. Marx criticizes Hegel's argument on the universality of the state and bureaucracy. For Marx, the state is equal to bureaucracy: it constitutes a social class based on the interest of the bourgeois class that dominates society and exploits the proletarian class. Therefore, the main disagreement between these political philosophers is whether the state (and bureaucracy) is universal or particular. The growth of e-government in the modern state as an important aspect of development leads us to contemplate the particularity or universality of e-government. In this article, we will argue that e-government is essentially universal. E-government, in itself, is impartial; therefore, it cannot be particular. The development of e-government eliminates many side effects of the private, personal, or particular interests of the individuals who work as the bureaucracy. Finally, we will argue that the more a state is developed, the more it is universal. Therefore, the development of e-government makes the state more universal and bears on the modern philosophical debate on the particularity or universality of bureaucracy and the state.

Keywords: particularity, universality, rational bureaucracy, impartiality

Procedia PDF Downloads 207
10636 Optimization of Air Pollution Control Model for Mining

Authors: Zunaira Asif, Zhi Chen

Abstract:

Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open-pit metal mine in Utah, USA. The method simultaneously uses meteorological data as a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained for selecting the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparison analysis shows that the baghouse is the least-cost option for particulate matter, compared with the electrostatic precipitator and wet scrubbers, whereas non-selective catalytic reduction and dry flue-gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
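The stochastic element can be illustrated with a toy Monte Carlo selection; the devices, efficiencies, costs, and ambient standard below are invented for illustration and do not reproduce the paper's LP or its cost figures:

```python
# Sample a meteorological dispersion factor by Monte Carlo and, for each
# draw, pick the cheapest control device whose removal efficiency keeps
# the resulting concentration within the standard.
import numpy as np

rng = np.random.default_rng(12)
emission = 120.0                  # uncontrolled emission rate (g/s)
limit = 10.0                      # ambient standard (ug/m^3)
devices = {                       # removal efficiency, annualized cost (M$)
    "baghouse": (0.99, 1.5),
    "electrostatic precipitator": (0.97, 1.2),
    "wet scrubber": (0.93, 0.9),
}

choices = []
for _ in range(10000):
    disp = rng.lognormal(mean=0.0, sigma=0.4)    # dispersion transfer factor
    feasible = {name: cost for name, (eff, cost) in devices.items()
                if emission * (1 - eff) * disp <= limit}
    if feasible:
        choices.append(min(feasible, key=feasible.get))

names, counts = np.unique(choices, return_counts=True)
print(dict(zip(names, counts / len(choices))))   # how often each device wins
```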

Keywords: air pollution, linear programming, mining, optimization, treatment technologies

Procedia PDF Downloads 160
10635 Functions of Public Policy in Private International Law

Authors: Fedorova Elena

Abstract:

In this article, we draw a distinction between two important functions of public policy in private international law. The first function is widely recognized and relates to preventing the application of foreign laws and the enforcement of foreign court judgments whenever their effects are incompatible with the domestic legal system of the forum. This effectively protects the sovereign rights of the forum state, as it allows it to resist the undesirable effects of foreign law-making and law-enforcement policies. The second function is less obvious, but no less important. Like internal private legal relationships, international private relationships are usually governed by rules of public policy from which the parties cannot derogate by mutual agreement. Therefore, for international private law relations, public policy has a function different from the one previously mentioned: in this case, public policy acts as a defense against unacceptable effects of party autonomy. This second function thus consists in limiting party autonomy whose effects would be unacceptable to the local legal system. Within this second function, the author analyses two types of public policy that can limit party autonomy: 'substantial' public policy (which regulates the substance of the international legal relationship) and 'conflictual' public policy (which regulates the parties' autonomy to choose the law applicable to the substance of the relationship). The author provides an analysis of these functions of public policy in the field of international contract law because of the important role of the principle of party autonomy in international contract relations.

Keywords: public policy, general theory of private international law, substantial public policy, conflictual public policy

Procedia PDF Downloads 542
10634 A Perspective on Teaching Mathematical Concepts to Freshman Economics Students Using 3D-Visualisations

Authors: Muhammad Saqib Manzoor, Camille Dickson-Deane, Prashan Karunaratne

Abstract:

The Cobb-Douglas production (utility) function is a fundamental function widely used in economics teaching and research. The key reason is the function's ability to describe actual production using inputs such as labour and capital. The characteristics of the function, such as returns to scale and marginal and diminishing marginal productivities, are covered in the introductory units of both microeconomics and macroeconomics with a 2-dimensional static visualisation of the function. However, less insight is provided regarding the three-dimensional surface, the changes in curvature properties due to returns to scale, the linkage of the short-run production function with its long-run counterpart and marginal productivities, the level curves, and constrained optimisation. Since (freshman) learners have diverse prior knowledge and cognitive skills, the existing "one size fits all" approach is not very helpful. The aim of this study is to bridge this gap by introducing a technological intervention with interactive animations of the three-dimensional surface and a sequential unveiling of the characteristics mentioned above, using Python software. A small classroom intervention has helped students enhance their analytical and visualisation skills towards active and authentic learning of this topic. However, to authenticate the strength of our approach, a quasi-Delphi study will be conducted to ask domain-specific experts: "What value to the learning process in economics is there in using a 2-dimensional static visualisation compared to using a 3-dimensional dynamic visualisation?" Here, three perspectives on the intervention were reviewed by a panel comprising novice students, experienced students, novice instructors, and experienced instructors, in an effort to determine the learnings from each type of visualisation within a specific domain of knowledge. The value of this approach is key to suggesting different pedagogical methods which can enhance learning outcomes.
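A minimal sketch of the kind of 3-D visualisation described (the paper's interactive animations are not reproduced) plots the Cobb-Douglas surface Q = A·L^α·K^β together with its level curves (isoquants); the parameters are illustrative, with α + β = 1 giving constant returns to scale:

```python
# Cobb-Douglas production surface with isoquants, in matplotlib.
import numpy as np
import matplotlib.pyplot as plt

A, alpha, beta = 1.0, 0.3, 0.7
L, K = np.meshgrid(np.linspace(0.1, 10, 60), np.linspace(0.1, 10, 60))
Q = A * L**alpha * K**beta

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(L, K, Q, cmap="viridis", alpha=0.9)
ax.contour(L, K, Q, levels=8, offset=0, colors="k")   # isoquants (level curves)
ax.set_xlabel("Labour L")
ax.set_ylabel("Capital K")
ax.set_zlabel("Output Q")
plt.show()
```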

Keywords: cobb-douglas production function, quasi-Delphi method, effective teaching and learning, 3D-visualisations

Procedia PDF Downloads 110