Search results for: mathematical programming problems with equilibrium constraints

9594 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network

Authors: Tzu-Yang Lin, Chuan-Ching Sue

Abstract:

Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU) subject to a single class delay constraint in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. In particular, an analytical model can be derived to determine the optimal sleep time of ONUs in every cycle without violating the maximum class delay constraint. Bandwidth allocation that accounts for this optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that no class violates the maximum class delay constraint, packets with a more relaxed delay constraint are treated as if they had the most stringent one and may be sent early. The ONU thus wastes energy staying in active mode to send packets that did not yet need to be sent. Accordingly, we propose a QoS-aware GBA with a novel intra-ONU scheduling scheme that releases packets according to their respective delay constraints, thereby enhancing energy efficiency without deteriorating delay performance. If packets are not explicitly classified but carry individual packet delay constraints, the intra-ONU scheduling can be modified to sort packets by their packet delay constraints rather than by class. Moreover, we propose a switchable ONU architecture in which the ONU switches its architecture according to the sleep time length, further improving the energy efficiency of the QoS-aware GBA. Simulation results show that the QoS-aware GBA ensures that packets in different classes, or with different delay constraints, do not violate their respective delay constraints, while consuming less power than the original GBA.
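
A minimal sketch of the kind of delay-aware intra-ONU scheduling the abstract describes — an earliest-deadline queue keyed on each packet's own delay constraint rather than its class; the names and the simplified deadline model are illustrative assumptions, not the authors' implementation:

```python
import heapq

class DelayAwareScheduler:
    """Toy intra-ONU scheduler: packets are released by their own deadlines
    (arrival time + delay constraint), not by class priority."""

    def __init__(self):
        self._queue = []  # min-heap ordered by absolute deadline

    def enqueue(self, packet_id, arrival_time, delay_constraint):
        heapq.heappush(self._queue, (arrival_time + delay_constraint, packet_id))

    def packets_due_by(self, wake_time):
        """Packets whose deadline falls before the ONU's next wake-up must be
        sent now; the rest may wait, letting the ONU sleep longer."""
        due = []
        while self._queue and self._queue[0][0] <= wake_time:
            due.append(heapq.heappop(self._queue)[1])
        return due
```

Packets with relaxed constraints thus stay queued across sleep cycles instead of being transmitted early, which is the energy-saving effect described above.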

Keywords: Passive Optical Networks, PONs, Optical Network Unit, ONU, energy efficiency, delay constraint

Procedia PDF Downloads 270
9593 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, during which characteristic flavors develop and the chocolate acquires a smooth mouthfeel as the particle size of the cocoa mass and other additives is reduced. Determination of the particle size and the volatile compound profile of the cocoa mass is therefore important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-sized chocolate manufacturers; other alternatives, such as micrometers and microscopy, cannot provide accurate measurements and yield little information. Volatile compound analysis of cocoa during conching has similar problems owing to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was installed in a conching machine and used to monitor the volatile compound profile of the chocolate during conching. A genetic programming model was established that correlates the volatile compound profiles, together with factors including cocoa content, sugar content, and conching temperature, with the particle size of the chocolate. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at 8 conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared with laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurement, and 99.3% were within ±10%.
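
The agreement figures at the end of the abstract amount to counting predictions that fall within a relative tolerance of the laser-scattering reference; a hedged sketch of that bookkeeping (the data handling is illustrative, not the authors' code):

```python
import numpy as np

def fraction_within(predicted, measured, rel_tol):
    """Fraction of predictions within measured * (1 +/- rel_tol)."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.mean(np.abs(predicted - measured) <= rel_tol * measured)

# Per the reported results, one would expect roughly:
# fraction_within(pred, laser, 0.05) -> 0.913
# fraction_within(pred, laser, 0.10) -> 0.993
```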

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 241
9592 Kinetics, Equilibrium and Thermodynamics of the Adsorption of Triphenyltin onto NanoSiO₂/Fly Ash/Activated Carbon Composite

Authors: Olushola S. Ayanda, Olalekan S. Fatoki, Folahan A. Adekola, Bhekumusa J. Ximba, Cecilia O. Akintayo

Abstract:

In the present study, the kinetics, equilibrium and thermodynamics of the adsorption of triphenyltin (TPT) from TPT-contaminated water onto a nanoSiO₂/fly ash/activated carbon composite were investigated in a batch adsorption system. Equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Temkin and Dubinin–Radushkevich (D-R) isotherm models. Pseudo-first- and second-order, Elovich and fractional power models were applied to test the kinetic data, and, in order to understand the mechanism of adsorption, thermodynamic parameters such as ΔG°, ΔS° and ΔH° were also calculated. The results showed very good compliance with the pseudo-second-order equation, while the Freundlich and D-R models fit the experimental data. Approximately 99.999% of TPT was removed from an initial concentration of 100 mg/L at 80 °C, a contact time of 60 min, pH 8 and a stirring speed of 200 rpm. Thus, the nanoSiO₂/fly ash/activated carbon composite could be used as an effective adsorbent for the removal of TPT from contaminated water and wastewater.
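
For reference, the standard forms of the models the abstract reports as fitting best — pseudo-second-order kinetics and the Freundlich and D-R isotherms — as they are commonly written in the adsorption literature:

```latex
% Pseudo-second-order kinetics (q_t: uptake at time t, q_e: equilibrium uptake)
\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e}

% Freundlich isotherm (K_F, n: empirical constants; C_e: equilibrium concentration)
q_e = K_F\, C_e^{1/n}

% Dubinin--Radushkevich isotherm, with \varepsilon = RT \ln\!\left(1 + 1/C_e\right)
q_e = q_m \exp\!\left(-\beta\, \varepsilon^2\right)
```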

Keywords: isotherm, kinetics, nanoSiO₂/fly ash/activated carbon composite, triphenyltin

Procedia PDF Downloads 285
9591 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles

Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban

Abstract:

In the Magnetically Assisted Chemical Separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams; the contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, magnetic particles coated with Cyanex272, or C272 (bis(2,4,4-trimethylpentyl) phosphinic acid), are evaluated for possible application in the extraction of Thorium (IV) from nuclear waste streams. The uptake behaviour of Th(IV) from nitric acid solutions was investigated through adsorption isotherm and adsorption kinetics studies in a batch system. The factors influencing Th(IV) adsorption were investigated and described in detail as functions of parameters such as initial pH value, contact time, adsorbent mass, and initial Th(IV) concentration. The MACS adsorbent showed the best results for fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 M. In addition, more than 80% of Th(IV) was removed within the first 2 hours, and the time required to reach adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. The equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg·g⁻¹. Adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order and intra-particle diffusion models. Kinetic studies showed that the adsorption followed a pseudo-second-order kinetic model, indicating that chemical adsorption was the rate-limiting step.
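
A minimal sketch of how a Langmuir fit like the one reported (maximum capacity ≈ 48 mg·g⁻¹) can be reproduced from batch data by nonlinear least squares; the data points here are placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# c_e: equilibrium Th(IV) concentration (mg/L), q_e: uptake (mg/g) -- illustrative
c_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
q_e = np.array([12.0, 20.0, 33.0, 41.0, 46.0])

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=(50.0, 0.1))
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```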

Keywords: Thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272

Procedia PDF Downloads 322
9590 A Combined AHP-GP Model for Selecting Knowledge Management Tool

Authors: Ahmad Sarfaraz, Raiyad Herwies

Abstract:

In this paper, a multi-criteria decision-making analysis is used to help an organization select the knowledge management (KM) tool that best fits and serves its needs. Building on a previous study, the AHP model is used to highlight and identify the main criteria and sub-criteria incorporated in the selection process. Alternative KM tools are compared against the different criteria and weighted accurately for incorporation into the GP model. The main goal is to combine the GP model with the AHP model so that the selection of the KM tool takes resource constraints into account. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how the AHP results can be incorporated into the GP model for better results.
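
A hedged sketch of the AHP step that feeds the GP model — deriving criteria weights from a pairwise-comparison matrix via its principal eigenvector; the 3×3 matrix is an invented example, not data from the study:

```python
import numpy as np

# Pairwise comparisons on the Saaty 1-9 scale, e.g. cost vs. usability vs. scalability
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()  # normalized criteria weights for the GP model
print(weights)
```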

Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making

Procedia PDF Downloads 373
9589 Modified Bat Algorithm for Economic Load Dispatch Problem

Authors: Daljinder Singh, J. S. Dhillon, Balraj Singh

Abstract:

According to the no-free-lunch theorem, no single search technique can perform best in all conditions. Metaheuristic optimization methods are an attractive choice for such problems, offering advantages like robust and reliable performance, global search capability, little information requirement, ease of implementation, parallelism, and no requirement of a differentiable or continuous objective function. In order to balance exploration and exploitation and to further enhance the performance of the bat algorithm, this paper proposes a modified bat algorithm that adds an additional search procedure based on the bat's previous experience. The proposed algorithm is used for solving the economic load dispatch (ELD) problem. Practical constraints such as valve-point loading are considered, along with power balance constraints and generator limits. The power demand constraint is handled through the variable elimination method. The proposed algorithm is tested on various ELD problems. The results obtained show that the proposed algorithm performs better on the majority of the ELD problems considered and is on par with existing algorithms for the remaining ones.
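
The valve-point loading mentioned above is conventionally modeled by superimposing a rectified sinusoid on the quadratic fuel-cost curve, which is what makes the ELD objective non-smooth and non-convex:

```latex
F_i(P_i) = a_i P_i^2 + b_i P_i + c_i
         + \left| e_i \sin\!\big(f_i\,(P_i^{\min} - P_i)\big) \right|,
\qquad \sum_{i=1}^{N} P_i = P_D + P_L
```

The variable elimination method handles the power balance equality by solving it for one dependent generator's output rather than penalizing violations.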

Keywords: bat algorithm, economic load dispatch, penalty method, variable elimination method

Procedia PDF Downloads 451
9588 Vendor Selection and Supply Quotas Determination by Using Revised Weighting Method and Multi-Objective Programming Methods

Authors: Tunjo Perič, Marin Fatović

Abstract:

In this paper, a new methodology for vendor selection and supply quotas determination (VSSQD) is proposed. The VSSQD problem is solved by a model that combines the revised weighting method, for determining the objective function coefficients, with a multiple objective linear programming (MOLP) method based on cooperative game theory. The criteria used for VSSQD are: (1) purchase costs and (2) the quality of the products supplied by individual vendors. The proposed methodology is tested on the example of flour purchasing for a bakery with two decision makers.

Keywords: cooperative game theory, multiple objective linear programming, revised weighting method, vendor selection

Procedia PDF Downloads 342
9587 Some Pertinent Issues and Considerations on CBSE

Authors: Anil Kumar Tripathi, Ratneshwer Gupta

Abstract:

All software engineering research and best industry practices aim at providing software products with a high degree of quality and functionality at low cost and in less time. These requirements are addressed by Component Based Software Engineering (CBSE) as well. CBSE, which deals with software construction by assembling components, is a revolutionary extension of software engineering. CBSE must define and describe processes that assure the timely completion of high-quality software systems composed of a variety of pre-built software components. Though these features provide distinct and visible benefits in software design and programming, they also raise some challenging problems. The aim of this work is to summarize the pertinent issues and considerations in CBSE in the form of concepts and observations that may lead to the development of new ways of dealing with the problems and challenges in CBSE.

Keywords: software component, component based software engineering, software process, testing, maintenance

Procedia PDF Downloads 387
9586 Solving Definition and Relation Problems in English Navigation Terminology

Authors: Ayşe Yurdakul, Eckehard Schnieder

Abstract:

Because of growing multidisciplinarity and multilinguality, communication problems in different technical fields are increasing. Each technical field has its own specific language and terminology, characterised by differing definitions of terms. In addition to definition problems, there are also relation problems between terms, among them synonymy, antonymy, hypernymy/hyponymy, ambiguity, risk of confusion, and translation problems. The terminology management system iglos of the Institute for Traffic Safety and Automation Engineering of the Technische Universität Braunschweig therefore aims to solve these problems through a methodological standardisation of term definitions with the aid of the iglos sign model and iglos relation types. The focus of this paper is on solving definition and relation problems between terms in English navigation terminology.

Keywords: iglos, iglos sign model, methodological resolutions, navigation terminology, common language, technical language, positioning, definition problems, relation problems

Procedia PDF Downloads 319
9585 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem

Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze

Abstract:

In the modern world, emergency management decision support systems are actively used by state organizations, which deal with extreme and abnormal processes and provide optimal and safe management of the supplies needed by civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Such extreme events obviously cause significant losses and damage to infrastructure. In these cases, intelligent support technologies are very important for the quick and optimal location-transportation of emergency services in order to avoid further losses. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system, and scientific research in this field occupies an important place in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system serving as an assistant tool that provides optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions to the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of needed products. The second objective is to minimize the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective and subjective data. The FMOELTP is posed in a static, fuzzy environment, since the decisions must be taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions to the problem, and it is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, the exact method may require long computing times; we therefore also propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
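
Schematically, the epsilon-constraint method used here keeps one objective and turns the remaining ones into bounded constraints; for the four objectives $f_1,\dots,f_4$ above:

```latex
\min_{x \in X} \; f_1(x)
\quad \text{s.t.} \quad f_k(x) \le \varepsilon_k, \qquad k = 2, 3, 4
```

Sweeping the bounds $\varepsilon_k$ over a grid and solving each resulting single-objective problem enumerates the Pareto front, which is the exactness property claimed above.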

Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem

Procedia PDF Downloads 308
9584 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After the operation of the different energy systems is calculated in simulation models in a first stage, yielding the resulting final energy demands, the results serve as input for a second-stage MILP optimization in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but necessitates simplifying the operation of the building energy system. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio so as to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
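
A schematic of the MILP pathway structure described above, in our own notation (binary decisions $x_{b,m,t}$ for applying measure $m$ to building $b$ in year $t$, a yearly budget $B_t$ and an emission target $E_t$); this is an assumed sketch, not the paper's formulation:

```latex
\min_{x} \; \sum_{b,t} \Big( c^{\mathrm{op}}_{b,t}(x) + \sum_{m} c^{\mathrm{inv}}_{m}\, x_{b,m,t} \Big)
\quad \text{s.t.} \quad
\sum_{b,m} c^{\mathrm{inv}}_{m}\, x_{b,m,t} \le B_t \;\; \forall t, \qquad
e_t(x) \le E_t \;\; \forall t, \qquad
x_{b,m,t} \in \{0,1\}
```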

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 10
9583 Optimal Planning of Transmission Line Charging Mode During Black Start of a Hydroelectric Unit

Authors: Mohammad Reza Esmaili

Abstract:

After a blackout occurs, the most important question is how fast electric service can be restored. Power system restoration is an immensely complex issue, and there should be a plan that can be executed within the shortest possible time. This plan has three main stages: black start, network reconfiguration and load restoration. In the black start stage, operators and experts may face several problems, for instance the failure to energize a long high-voltage transmission line connected to the electrical source. In this situation, the generator may be tripped because of an unsuitable setting of its line charging mode or a high absorbed reactive power. In order to solve this problem, the line charging process is defined as a nonlinear programming problem and optimized using GAMS software in this paper. The optimized process is performed on a grid that includes a 250 MW hydroelectric unit and a 400 kV transmission system. Simulations and field test results show the effectiveness of the optimal planning.

Keywords: power system restoration, black start, line charging mode, nonlinear programming

Procedia PDF Downloads 65
9582 Integrated Approach of Quality Function Deployment, Sensitivity Analysis and Multi-Objective Linear Programming for Business and Supply Chain Programs Selection

Authors: T. T. Tham

Abstract:

The aim of this study is to propose an integrated approach to determining the most suitable programs, based on Quality Function Deployment (QFD), Sensitivity Analysis (SA) and a Multi-Objective Linear Programming model (MOLP). Firstly, QFD is used to determine business requirements and transform them into business and supply chain programs, from which the technical scores of all programs are obtained. All programs are then evaluated against five criteria (productivity, quality, cost, technical score, and feasibility). Sets of weights for these criteria are built using sensitivity analysis. The multi-objective linear programming model is applied to select suitable programs according to multiple conflicting objectives under a budget constraint. A case study from the Sai Gon-Mien Tay Beer Company is given to illustrate the proposed methodology. The outcome of the study provides a comprehensive picture for companies seeking to select the programs that yield the optimal solution according to their preferences.

Keywords: business program, multi-objective linear programming model, quality function deployment, sensitivity analysis, supply chain management

Procedia PDF Downloads 109
9581 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms

Authors: Deepti Kohli, Meeta Keswani Mehra

Abstract:

The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy, using the probabilistic voting approach and focusing on lobby formation, in order to analyze the complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There are three types of agents, namely Left-wing, Moderate and Right-wing, who together comprise the total voting population. It is assumed that the Left and Right agents are free to initiate a lobby of their choice. Once initiated, these lobbies collect donations, which in turn can be contributed to one (or both) electoral candidates in order to influence them to implement the lobby's preferred policy. Four lobby-formation scenarios are considered: no lobby, only Left, only Right, and both Left and Right. The equilibrium policy platforms, the individual donations by agents to their respective lobbies, and the contributions offered to the electoral candidates are solved for under each of these four cases. Since the agents are assumed unable to coordinate one another's actions during the lobby formation stage, each lobby forms with some probability, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely in the cases of no lobby and of both (extreme) lobbies forming, but diverge when only one (Left or Right) lobby forms. This is because, when no lobby is formed, only the centripetal forces (emerging from the election-winning motive) are present, while when both extreme (Left-wing and Right-wing) lobbies form, the centrifugal forces (emerging from lobby formation) also arise but cancel each other out, again resulting in pure policy convergence. In contrast, when only one lobby forms, centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that, in equilibrium, while the donation by a specific agent type increases when both lobbies form compared to when only one lobby forms, the probability of implementation of the policy advocated by that lobby group falls.

Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates

Procedia PDF Downloads 321
9580 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment, in which the purchasing cost, the percentage of items delivered late and the percentage of rejected items provided by each supplier are stochastic parameters following arbitrary probability distributions. To this end, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, the total number of items delivered late and the total number of rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the latter single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, we explore the impact of the stochastic parameters on the obtained solution via a sensitivity analysis exploiting the coefficient of variation. The results show that as the stochastic parameters take greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.
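
A minimal sketch of the dependent-chance fitness evaluation inside the GA — estimating by Monte Carlo simulation the probability that total purchasing cost stays below the decision maker's threshold; the distributions and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def chance_fitness(order_qty, unit_cost_sampler, budget, n_sim=10_000):
    """Estimate Pr{ total purchasing cost <= budget } for an allocation
    vector order_qty, drawing stochastic unit costs in each simulation."""
    hits = 0
    for _ in range(n_sim):
        unit_costs = unit_cost_sampler(rng)  # one sampled cost per supplier
        if np.dot(unit_costs, order_qty) <= budget:
            hits += 1
    return hits / n_sim

# e.g. three suppliers with normally distributed unit costs (illustrative)
sampler = lambda r: r.normal([10.0, 11.0, 9.5], [0.5, 1.0, 0.8])
print(chance_fitness(np.array([100, 50, 80]), sampler, budget=2300.0))
```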

Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection

Procedia PDF Downloads 247
9579 Application of Genetic Programming for Evolution of Glass-Forming Ability Parameter

Authors: Manwendra Kumar Tripathi, Subhas Ganguly

Abstract:

A few glass-forming ability expressions in terms of characteristic temperatures have been proposed in the literature, and attempts have been made to correlate these expressions with the critical diameter of bulk metallic glass compositions. However, with the advent of new alloys, many exceptions have been noted and reported. In the present approach, a genetic programming based code is developed which generates an expression in terms of the input variables, i.e., the three characteristic temperatures, viz. the glass transition temperature (Tg), the onset crystallization temperature (Tx) and the offset temperature of melting (Tl), that has maximum correlation with the critical diameter (Dmax). The evolved expression shows improved correlation with the critical diameter. In addition, the expression can be explained on the basis of the time-temperature transformation curve.
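
The selection pressure in such a GP run is the correlation of a candidate expression with Dmax over a set of alloys; a hedged sketch of that fitness evaluation (the alloy data are invented, and the candidates shown — the classical Trg = Tg/Tl and γ = Tx/(Tg+Tl) — merely illustrate the interface):

```python
import numpy as np

def fitness(expr, tg, tx, tl, dmax):
    """Correlation between a candidate GFA expression and critical diameter."""
    return abs(np.corrcoef(expr(tg, tx, tl), dmax)[0, 1])

# Illustrative alloy data (temperatures in K, Dmax in mm) -- not from the study
tg = np.array([625.0, 570.0, 680.0, 610.0])
tx = np.array([675.0, 620.0, 740.0, 655.0])
tl = np.array([1010.0, 950.0, 1150.0, 990.0])
dmax = np.array([10.0, 6.0, 14.0, 8.0])

trg = lambda tg, tx, tl: tg / tl             # reduced glass transition temperature
gamma = lambda tg, tx, tl: tx / (tg + tl)    # another known GFA parameter
print(fitness(trg, tg, tx, tl, dmax), fitness(gamma, tg, tx, tl, dmax))
```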

Keywords: glass forming ability, genetic programming, bulk metallic glass, critical diameter

Procedia PDF Downloads 323
9578 Architecture for QoS Based Service Selection Using Local Approach

Authors: Gopinath Ganapathy, Chellammal Surianarayanan

Abstract:

Services are growing rapidly, and they are generally aggregated into composite services to accomplish complex business processes. Several services may offer the same required function for a particular task in a composite service, so a choice has to be made among functionally similar alternatives. Quality of Service (QoS) acts as the discriminating factor in deciding which component services should be selected to satisfy the quality requirements of a user during service composition. There are two categories of approaches for QoS-based service selection, namely global and local approaches. Global approaches are known to be NP-hard in time and offer poor scalability in large-scale composition. As an alternative to global methods, local selection methods are emerging, which reduce the search space by breaking the large, complex problem of selecting services for the workflow into independent sub-problems of selecting services for individual tasks. In this paper, a distributed architecture for selecting services based on QoS using local selection is presented, along with an overview of the local selection methodology. The architecture describes the core components needed to implement the local approach, namely the selection manager and the QoS manager, and their functions. The selection manager consists of two components: a constraint decomposer, which decomposes the given global or workflow-level constraints into local or task-level constraints, and a service selector, which selects for each task the service with maximum utility that satisfies the corresponding local constraints. The QoS manager manages QoS information at two levels, namely the service class level and the individual service level. The architecture serves as an implementation model for local selection.
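
A minimal sketch of the service selector's local step — per task, pick the candidate with maximum utility among those satisfying the decomposed task-level constraint; the data structures and field names are assumptions:

```python
def select_service(candidates, max_latency):
    """candidates: dicts like {'name': ..., 'utility': ..., 'latency': ...};
    max_latency: the task-level bound produced by the constraint decomposer."""
    feasible = [s for s in candidates if s["latency"] <= max_latency]
    if not feasible:
        return None  # signal the decomposer to re-decompose the global constraint
    return max(feasible, key=lambda s: s["utility"])

task_candidates = [
    {"name": "svcA", "utility": 0.8, "latency": 120},
    {"name": "svcB", "utility": 0.9, "latency": 200},
]
print(select_service(task_candidates, max_latency=150))  # -> svcA
```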

Keywords: architecture of service selection, local method for service selection, QoS based service selection, approaches for QoS based service selection

Procedia PDF Downloads 418
9577 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has become a topic of great recent interest. Flying to space objects like asteroids offers two main incentives: one is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical fields of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer; the resulting algorithms may be applied to other, earth-bound applications as well, such as deep-sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All', i.e. cognition-based autonomous navigation exemplified by resource mining in space) investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls; these direct methods are applied in this paper. The resulting high-dimensional constrained NLP can be solved efficiently by special NLP methods, e.g. sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to the gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims like short flight times and low energy consumption are accommodated by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level with IP to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the case for trajectory optimization as a foundation for autonomous decision making during deep space missions, and simultaneously show the enormous increase in the range of possible flight maneuvers gained by being able to consider different and opposing mission objectives.
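
The transcription step can be made concrete with the simplest (explicit Euler) discretization; a sketch in our notation, with $N$ grid points, free final time $t_f$, and weights $w_1, w_2$ standing in for the paper's multi-criteria weighting of flight time versus energy:

```latex
\min_{x_0,\dots,x_N,\;u_0,\dots,u_{N-1},\;t_f} \;
w_1\, t_f + w_2 \sum_{k=0}^{N-1} h\, \|u_k\|^2
\quad \text{s.t.} \quad
x_{k+1} = x_k + h\, f(x_k, u_k), \quad h = t_f / N,
\qquad x_0 = x_{\mathrm{start}}, \;\; x_N = x_{\mathrm{target}}
```

The dynamics $f$ collect the gravitational and thrust ODEs; the resulting finite-dimensional NLP is then handed to a solver such as WORHP.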

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 402
9576 Generalized Limit Equilibrium Solution for the Lateral Pile Capacity Problem

Authors: Tomer Gans-Or, Shmulik Pinkert

Abstract:

The determination of lateral pile capacity per unit length is a key aspect of geotechnical engineering. Traditional approaches for assessing the lateral capacity of piles in cohesive soils involve the application of upper-bound and lower-bound plasticity theorems. However, a comprehensive solution encompassing the entire spectrum of soil strength parameters, particularly for frictional soils with or without cohesion, is still lacking. This research introduces an innovative implementation of the slice-method limit equilibrium solution for lateral capacity assessment. For any given numerical discretization of the soil domain around the pile, the lateral capacity evaluation is based on the mobilized strength concept. The critical failure geometry is then found by a unique optimization procedure that combines factor-of-safety minimization with geometrical optimization. The robustness of the suggested methodology lies in the solution being independent of any predefined assumptions. The solution is validated through comparison with established plasticity solutions for cohesive soils. Furthermore, the study demonstrates the applicability of the limit equilibrium method to unresolved cases involving frictional and cohesive-frictional soils. Beyond providing capacity values, the method enables the mobilized strength concept to be used to generate safety-factor distributions for scenarios representing pre-failure states.
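
The mobilized strength concept used above follows the usual limit-equilibrium convention of scaling the Mohr-Coulomb parameters by a common factor of safety $F$:

```latex
c_{\mathrm{mob}} = \frac{c}{F}, \qquad
\tan\varphi_{\mathrm{mob}} = \frac{\tan\varphi}{F}
```

The critical failure geometry is then the admissible slice geometry that minimizes $F$.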

Keywords: lateral pile capacity, slice method, limit equilibrium, mobilized strength

Procedia PDF Downloads 44
9575 Integer Programming Model for the Network Design Problem with Facility Dependent Shortest Path Routing

Authors: Taehan Lee

Abstract:

We consider a network design problem with a shortest-path routing restriction based on metric values determined by the facilities installed on each arc. In the conventional multicommodity network design problem, a commodity can be routed through any possible path when capacity is available. Here, by contrast, the commodity between two nodes must be routed on a path with the shortest metric value, where each link's metric value is determined by the facilities installed on the link. This routing restriction gives the problem a distinct characteristic. We present an integer programming formulation containing the primal-dual optimality conditions for shortest path routing, and give some computational results for the model.
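
The primal-dual optimality conditions referred to above are, in essence, the standard shortest-path conditions on node potentials $\pi$; schematically, with $c_{ij}(y)$ the arc metric induced by the installed facilities $y$ and $x_{ij}$ the routing variables (notation ours; the implication would typically be linearized with big-M terms in the IP):

```latex
\pi_j - \pi_i \le c_{ij}(y) \quad \forall (i,j) \in A, \qquad
x_{ij} = 1 \;\Rightarrow\; \pi_j - \pi_i = c_{ij}(y)
```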

Keywords: integer programming, multicommodity network design, routing, shortest path

Procedia PDF Downloads 410
9574 Genetic Algorithm and Multi-Parametric Programming Based Cascade Control System for Unmanned Aerial Vehicles

Authors: Dao Phuong Nam, Do Trong Tan, Pham Tam Thanh, Le Duy Tung, Tran Hoang Anh

Abstract:

This paper considers the problem of designing a cascade control system for unmanned aerial vehicles (UAVs). Because modelling a UAV is complicated, it is necessary to separate the model into two subsystems. The proposed cascade control structure is a hierarchical scheme that includes a robust controller for the inner subsystem based on H-infinity theory, a trajectory generator using a genetic algorithm (GA), and an outer-loop control law based on the multi-parametric programming (MPP) technique to overcome the disadvantage of a large amount of online computation. Simulation results are presented to show that the equivalent path is found and followed by the proposed cascade control scheme.

Keywords: genetic algorithm, GA, H infinity, multi-parametric programming, MPP, unmanned aerial vehicles, UAVs

Procedia PDF Downloads 203
9573 Solving Nonconvex Economic Load Dispatch Problem Using Particle Swarm Optimization with Time Varying Acceleration Coefficients

Authors: Alireza Alizadeh, Hossein Ghadimi, Oveis Abedinia, Noradin Ghadimi

Abstract:

A particle swarm optimization with time-varying acceleration coefficients (PSO-TVAC) is proposed in this paper to solve the optimal economic load dispatch (ELD) problem. The proposed methodology readily handles non-convex economic load dispatch problems with various constraints, including transmission losses, dynamic operation constraints and prohibited operating zones. The proposed approach has been implemented on the 3-machine 6-bus, IEEE 5-machine 14-bus, and IEEE 6-machine 30-bus systems, as well as a 13-thermal-unit power system. The proposed technique is compared with a hybrid approach on ELD problems incorporating the valve-point effect. The comparison results demonstrate the capability of the proposed method, giving significant improvements in generation cost for the economic load dispatch problem.
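
The time-varying acceleration coefficients that give PSO-TVAC its name are conventionally scheduled linearly over the run, shifting the swarm from exploration toward exploitation:

```latex
c_1(t) = \big(c_{1f} - c_{1i}\big)\frac{t}{T} + c_{1i}, \qquad
c_2(t) = \big(c_{2f} - c_{2i}\big)\frac{t}{T} + c_{2i}
```

with $c_1$ typically decreasing and $c_2$ increasing as the iteration count $t$ approaches the maximum $T$, inside the usual velocity update $v \leftarrow w v + c_1(t)\, r_1 (p_{\mathrm{best}} - x) + c_2(t)\, r_2 (g_{\mathrm{best}} - x)$.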

Keywords: PSO-TVAC, economic load dispatch, non-convex cost function, prohibited operating zone, transmission losses

Procedia PDF Downloads 380
9572 Seasonal Variation of the Unattached Fraction and Equilibrium Factor of ²²²Rn, ²²⁰Rn

Authors: Rajan Jakhu, Rohit Mehra

Abstract:

Radon (²²²Rn) and its decay products are the major sources of natural radiation exposure of the general population. The activity concentrations of radon and thoron gases and of their unattached and attached short-lived progeny in the indoor environment of the Jaipur and Ajmer districts of Rajasthan were determined via passive measurements using pinhole cup dosimeters, deposition-based progeny sensors (DRPS/DTPS) and wire-mesh-capped (DRPS/DTPS) progeny sensors. The results of this study reveal that the radon and thoron concentrations (CRn, CTn) are highest in the winter season. Radon and its decay product concentrations vary seasonally, whereas the environmental parameters do not appear to affect the thoron and thoron decay product concentrations in a regular manner. The average values for radon and its decay products are maximum in winter and minimum in summer. The equilibrium factor for radon is observed to be 0.50, 0.47 and 0.49 in the winter, rainy and summer seasons, respectively. The annual average value of the unattached fraction of the radon progeny comes out to be 0.34. The average thoron (²²⁰Rn) concentration and its equilibrium factor in the studied area come out to be 74, 39 and 45 Bq m⁻³ and 0.07, 0.11 and 0.07, respectively, for the winter, rainy and summer seasons, with an annual average unattached fraction of about 0.18. The annual average radiological doses from exposure to indoor radon and thoron progeny come out to be 0.88 and 0.78 mSv, respectively.
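
For context, the equilibrium factor quoted above is conventionally defined as the ratio of the equilibrium-equivalent concentration (EEC) of the short-lived progeny to the parent gas concentration, with $F = 1$ corresponding to secular equilibrium:

```latex
F = \frac{C_{\mathrm{EEC}}}{C_{\mathrm{gas}}}
```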

Keywords: equilibrium factor, radon, seasonal variation, thoron, unattached fraction

Procedia PDF Downloads 299
9571 Problems of Boolean Reasoning Based Biclustering Parallelization

Authors: Marcin Michalak

Abstract:

Biclustering is a method of two-dimensional data analysis. For several years it has been possible to express this task in terms of Boolean reasoning, for processing continuous, discrete and binary data. The mathematical background of the approach — the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria — is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. In this paper, the basics of the Boolean reasoning approach to biclustering are presented, and in this context the problems of parallelizing the computation are raised.

Keywords: Boolean reasoning, biclustering, parallelization, prime implicant

Procedia PDF Downloads 113
9570 Automated Test Data Generation for Some Types of Algorithms

Authors: Hitesh Tahbildar

Abstract:

The cost of test data generation for a program is computationally very high, and in the general case no algorithm that generates test data for all types of algorithms has been found; moreover, the cost of generating test data differs between algorithm types. To date, the emphasis has been on generating test data for different types of programming constructs rather than for different types of algorithms. Test data generation methods have been implemented to find heuristics for different types of algorithms. Several algorithm types — including divide and conquer, backtracking, the greedy approach, and dynamic programming — have been tested with respect to the minimum cost of test data generation. Our experimental results suggest that the algorithm type can serve as a necessary condition for selecting heuristics, while the programming constructs provide a sufficient condition for selecting our heuristics. Finally, we recommend the heuristics for test data generation to be selected for different types of algorithms.

Keywords: longest path, saturation point, lmax, kL, kS

Procedia PDF Downloads 392
9569 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As such problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we involve reverse flows through an integrated design of a forward/reverse supply chain network, formulated as a mixed-integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm with a revised random-path direct encoding method is considered as the solution methodology. Every algorithm has parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adopted to identify the optimum operating conditions of the proposed memetic algorithm and improve the results. In this study, four factors are considered, namely population size, crossover rate, local search iterations, and number of iterations. Analyzing these parameters and the resulting improvement are the focus of this research.

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 179
9568 A Refinement Strategy Coupling Event-B and Planning Domain Definition Language (PDDL) for Planning Problems

Authors: Sabrine Ammar, Mohamed Tahar Bhiri

Abstract:

Automatic planning has a de facto standard language, the Planning Domain Definition Language (PDDL), for describing planning problems; it formalizes planning problems described through the concept of a state space. PDDL-related dynamic analysis tools, namely planners and validators, are insufficient for verifying and validating PDDL descriptions: they only detect errors a posteriori, through testing. In this paper, we recommend a formal approach to automatic planning that couples the two languages Event-B and PDDL. Event-B is used for the formal modeling of planning problems by stepwise refinement with mathematical proofs. The paper proposes a refinement strategy for obtaining reliable PDDL descriptions from an ultimate Event-B model that is correct by construction; this ultimate model is automatically translated into PDDL using our MDE tool Event-B2PDDL.

Keywords: code generation, event-b, PDDL, refinement strategy, translation rules

Procedia PDF Downloads 178
9567 Programming without Code: An Approach and Environment to Conditions-On-Data Programming

Authors: Philippe Larvet

Abstract:

This paper presents the concept of an object-based programming language in which tests (if... then... else) and control structures (while, repeat, for...) disappear, replaced by conditions on data. Following the object paradigm, data are still embedded inside objects as variable-value couples, but object methods are expressed in the form of logical propositions ('conditions on data', or CODs). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. To implement this approach, a central inference engine runs and examines objects one after another, collecting all the CODs of each object. CODs are treated as rules in a rule-based system: the left part of each proposition (the left side of the '=>' sign) is the premise and the right part is the conclusion. Premises are evaluated and conclusions are fired; the conclusions modify the variable-value couples of the object, and the engine moves on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through samples, the paper also presents several hints for implementing a simple mechanism able to process this 'COD language'. The proposed approach can be used in the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data, instead of using classical and long-to-learn languages, engineers and specialists can easily simulate and validate the functioning of complex systems.
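
A toy sketch of the inference loop outlined above — an engine that evaluates each object's CODs as premise ⇒ conclusion rules and updates its variable-value couples until nothing changes; all structures and names are illustrative assumptions:

```python
class CodObject:
    def __init__(self, data, rules):
        self.data = data    # variable-value couples
        self.rules = rules  # list of (premise, conclusion) callables

    def step(self):
        """One engine pass: fire every conclusion whose premise holds.
        Returns True if the variable-value couples actually changed."""
        before = dict(self.data)
        for premise, conclusion in self.rules:
            if premise(self.data):
                conclusion(self.data)
        return self.data != before

# variable1 = value1 AND variable2 > value2  =>  variable3 = value3
tank = CodObject(
    data={"valve": "open", "level": 8.0, "alarm": False},
    rules=[(lambda d: d["valve"] == "open" and d["level"] > 7.5,
            lambda d: d.update(alarm=True))],
)
while tank.step():  # the central engine iterates until quiescence
    pass
print(tank.data["alarm"])  # -> True
```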

Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation

Procedia PDF Downloads 209
9566 Mathematical Competence as It Is Defined through Learners' Errors in Arithmetic and Algebra

Authors: Michael Lousis

Abstract:

Mathematical competence is the overarching aim of every mathematical teaching and learning endeavour. It can be defined as an idealised conceptualisation of the quality of cognition and of the ability to implement in practice the mathematical subject matter included in the curriculum, displayed only through the performance of doing mathematics. The present study gives a clear definition of mathematical competence in the domains of Arithmetic and Algebra, a definition that stems from the explanation of learners' errors in these domains. The learners whose errors are explained were Greek and English participants in a large, international, longitudinal, comparative research program entitled the Kassel Project. The participants' errors emerged from their work on the mathematical questions and problems of the tests presented to them. The tests were constructed so that only the outcomes of the participants' work were captured, not the course of thinking that led to those outcomes. The intention was for the tests to provide directly comparable results and simultaneously to avoid any probable bias; such bias could stem from involving many markers from different countries and cultures, with many different belief systems concerning the assessment of learners' courses of thinking. In this way, the validity of the research was protected. This constraint necessitated specific research methods and theoretical perspectives in order to disclose the participants' erroneous ways of thinking. These were Methodological Pragmatism, Symbolic Interactionism, Philosophy of Mind and the ideas of Computationalism, which were used to establish the grounds of the adequacy and legitimacy of the kinds of knowledge obtained through the explanations given by the error analysis. The employment of this methodology and of these theoretical perspectives resulted in the definition of the learners' mathematical competence, which is the thesis of the present study. Thus, learners' mathematical competence depends on three key elements that should be developed in their minds: appropriate representations, appropriate meaning, and appropriately developed schemata. This definition in turn determined the development of appropriate teaching practices and interventions conducive to the achievement, and finally the attainment, of mathematical competence.

Keywords: representations, meaning, appropriate developed schemata, computationalism, error analysis, explanations for the probable causes of the errors, Kassel Project, mathematical competence

Procedia PDF Downloads 257
9565 Analysis of Cardiac Health Using Chaotic Theory

Authors: Chandra Mukherjee

Abstract:

The prevalent understanding of biological systems is based on the standard scientific perception of natural equilibrium, determinism and predictability. Recently, a rethinking of these concepts was presented, and a new scientific perspective emerged that combines complexity theory with deterministic chaos theory, nonlinear dynamics and the theory of fractals. The unpredictability of chaotic processes is likely to change our understanding of diseases and their management. Mathematically, chaos is defined as deterministic behavior with irregular patterns that obey equations critically dependent on initial conditions. Chaos theory is the branch of science concerned with nonlinear dynamics, fractals, bifurcations, periodic oscillations and complexity. Recently, biomedical interest in this field has made these mathematical concepts available to medical researchers and practitioners. Any biological network system is considered to have a nominal state, recognized as a homeostatic state. In reality, the different physiological systems are not in a stable state of homeostatic balance under normal conditions, but rather in a dynamically stable state with chaotic behavior and complexity. Biological systems like the heart rhythm and the brain's electrical activity are dynamical systems that can be classified as chaotic systems with sensitive dependence on initial conditions. In biological systems, the state of disease is characterized by a loss of complexity and chaotic behavior, and by the presence of pathological periodicity and regulatory behavior. The failure or collapse of nonlinear dynamics is an indication of disease rather than a characteristic of health.

Keywords: HRV, HRVI, LF, HF, DII

Procedia PDF Downloads 409