Search results for: convex and close-to-convex
89 Periodontal Soft Tissue Sculpturing and Use of Interim Appliance for Rehabilitation of Anterior Edentulousness: Case Report
Authors: Hande Yesil, Seda Aycan Altan, M. Vehbi Bal, Alper Uyar, O. Cumhur Sipahi
Abstract:
Purpose: Fixed partial dentures (FPDs) must fulfill functional requirements such as phonetics, chewing efficiency and esthetics, especially in the anterior region. A convex type tissue surface is usually recommended for pontics of FPDs. That pontic design also provides suitable oral hygiene and ease of cleaning. However, high esthetic requirements and a correct emergence profile are not always achievable because of the convex shape of adjacent soft tissues. Therefore, the ovate type pontic, which fulfills the high esthetic demands of patients, may be a good alternative to the modified ridge lap pontic design. Clinical Report: A female patient was referred with the complaint of anterior upper edentulousness. In the oral examination it was determined that teeth 11, 12, 21 and 22 were missing. A thick and convex gingival tissue that could cause esthetic problems was also observed. Periodontal augmentation surgery was performed to ensure proper papillary configuration and gingival contour. An interim removable partial denture (IRPD) which applied pressure to the operated gingival tissues was fabricated postoperatively. The IRPD was used for 4 weeks and, after completion of tissue sculpting, the permanent FPD with an ovate pontic was fabricated and cemented. After a follow-up period of 6 months, no esthetic or hygienic problems were detected and the patient was satisfied with her prosthesis. Conclusion: It was concluded that shaping of gingival contours with an IRPD and use of an FPD with an ovate pontic fulfills all esthetic and hygienic requirements.
Keywords: interim appliance, ovate pontic, tissue sculpturing, fixed partial denture
Procedia PDF Downloads 280
88 A Numerical Study on the Flow in a Pipe with Perforated Plates
Authors: Myeong Hee Jeong, Man Young Kim
Abstract:
The use of perforated plates and tubes is common in applications such as vehicle exhaust silencers, attenuators in air moving ducts and duct linings in jet engines. Perforated plate flow conditioners designed to improve flow distribution upstream of an orifice plate flow meter typically have 50–60% free area, but these generally employ a non-uniform distribution of holes of several sizes to encourage the formation of a fully developed pipe flow velocity distribution. In this study, therefore, numerical investigations of the flow characteristics with various perforated plates have been performed and compared to the case without a perforated plate. Three different models are adopted: a flat perforated plate, a perforated plate convex toward the inlet, and a perforated plate convex toward the outlet. Simulation results show that the pressure drops with and without perforated plates are similar to each other. However, it can be found that the differently shaped perforated plates influence the velocity contour, flow uniformity index, and location of the fully developed fluid flow. These results can be used as a practical guide to the best design of a pipe with a perforated plate.
Keywords: perforated plate, flow uniformity, pipe turbulent flow, CFD (Computational Fluid Dynamics)
Procedia PDF Downloads 691
87 Analysis of Soft and Hard X-Ray Intensities Using Different Shapes of Anodes in a 4kJ Mather Type Plasma Focus Facility
Authors: Mahsa Mahtab, Morteza Habibi
Abstract:
The effect of different anode tip geometries on the intensity of soft and hard x-rays emitted from a 4 kJ plasma focus device is investigated. For this purpose, 5 different anode tips are used. The shapes of the uppermost region of these anodes are cylindrical-flat, cylindrical-hollow, spherical-convex, cone-flat and cone-hollow. The analyzed data show that the cone-flat, spherical-convex and cone-hollow anodes significantly increase X-ray intensity, in that order, in comparison with the cylindrical-flat anode, while the cylindrical-hollow tip decreases it. Reducing the anode radius at its end in conic or spherical anodes enhances SXR by increasing plasma density: a greater mass of gas is collected and the more gradual transition phase forms a more stable dense plasma pinch. HXR is also enhanced by increasing the energy of the electrons colliding with the anode surface through the raised induced electric field. Finally, the cone-flat anode is recommended for cases in which the plasma focus device is used as an X-ray source, because it gives the highest yield of X-ray emissions.
Keywords: plasma focus, anode tip, HXR, SXR, pinched plasma
Procedia PDF Downloads 400
86 Load Management Using Multiple Sequential Load Shaping Techniques
Authors: Amira M. Attia, Karim H. Youssef, Nabil H. Abbasi
Abstract:
Demand Side Management (DSM) is an essential characteristic of current and future smart grid systems. As one of the DSM functions, load management aims to control customers' total electric consumption and the utility's load factor by using various load shaping techniques. However, applying load shaping techniques such as load shifting, peak clipping, or strategic conservation individually does not provide the desired level of improvement in load factor increment and/or customer bill reduction. In this paper, two load shaping techniques will be simulated as constrained optimization problems. The purpose is to study the combined application of load shifting and strategic conservation, as well as the combined application of load shifting and peak clipping. The problem will be formulated and solved using disciplined convex programming (CVX) in MATLAB® R2013b. Simulation results will be evaluated and compared to identify the multi-technique model that is most effective in improving the load curve.
Keywords: convex programming, demand side management, load shaping, multiple, building energy optimization
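As a rough illustration of how such a load shaping problem can be posed as a disciplined convex program, here is a minimal sketch in Python with CVXPY rather than the MATLAB CVX toolbox used in the paper; the 24-hour demand profile, the 20% shifting limit, and the smoothness penalty are hypothetical choices, not the paper's model.

```python
# Minimal sketch (not the paper's model): combined load shifting and peak clipping
# as a disciplined convex program. All profiles and limits are hypothetical.
import cvxpy as cp
import numpy as np

hours = 24
baseline = 50 + 30 * np.sin(np.linspace(0, 2 * np.pi, hours))  # hypothetical demand (kW)

load = cp.Variable(hours, nonneg=True)   # shaped load profile
shift = cp.Variable(hours)               # shifted energy per hour (+in / -out)
peak = cp.Variable()                     # clipped peak level

constraints = [
    load == baseline + shift,            # shifting only re-times energy ...
    cp.sum(shift) == 0,                  # ... total consumed energy is preserved
    cp.abs(shift) <= 0.2 * baseline,     # at most 20% of each hour can be moved
    load <= peak,                        # peak clipping: cap the shaped profile
]

# Flatten the curve: minimize the peak plus a small smoothness penalty.
objective = cp.Minimize(peak + 0.01 * cp.sum_squares(cp.diff(load)))
cp.Problem(objective, constraints).solve()

load_factor = load.value.mean() / load.value.max()
print(f"peak: {peak.value:.1f} kW, load factor: {load_factor:.3f}")
```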
Procedia PDF Downloads 313
85 Parameterized Lyapunov Function Based Robust Diagonal Dominance Pre-Compensator Design for Linear Parameter Varying Model
Authors: Xiaobao Han, Huacong Li, Jia Li
Abstract:
For dynamic decoupling of a linear parameter varying system, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI). To solve this problem, the optimization problem is first equivalently transformed into a new form that eliminates the coupling between the parameterized Lyapunov function (PLF) and the pre-compensator. The problem is then reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained which satisfies both the robust performance and the decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation of a turbofan engine LPV model.
Keywords: linear parameter varying (LPV), parameterized Lyapunov function (PLF), linear matrix inequalities (LMI), diagonal dominance pre-compensator
Procedia PDF Downloads 399
84 Parental Drinking and Risky Alcohol Related Behaviors: Predicting Binge Drinking Trajectories and Their Influence on Impaired Driving among College Students
Authors: Shiran Bord, Assaf Oshri, Matthew W. Carlson, Sihong Liu
Abstract:
Background: Alcohol-impaired driving (AID) and binge drinking are major health concerns among college students. Although the link between binge drinking and AID is well established, knowledge regarding binge drinking patterns, the factors influencing binge drinking, and the associations between consumption patterns and alcohol-related risk behaviors is lacking. Aims: To examine heterogeneous trajectories of binge drinking during college and to test factors that might predict class membership as well as outcomes of class membership. Methods: Data were obtained from a sample of 1,265 college students (Mage = 18.5, SD = .66) as part of the Longitudinal Study of Violence Against Women (N = 1,265; 59.3% female; 69.2% white). Analyses were completed in three stages. First, a growth curve analysis was conducted to identify trajectories of binge drinking over time. Second, growth curve mixture modeling analyses were pursued to assess unobserved growth trajectories of binge drinking without predictors. Lastly, parental drinking variables were added to the model as predictors of class membership, and AID and being a passenger of a drunk driver were added to the model as outcomes. Results: Three binge drinking trajectories were identified: high-convex, medium-concave and low-increasing. Parental drinking was associated with being in the high-convex and medium-concave classes. Compared to the low-increasing class, the high-convex and medium-concave classes reported more AID and being a passenger of a drunk driver more frequently. Conclusions: Parental drinking may affect children's later engagement in AID. Efforts should focus on parent education regarding the consequences of parental modeling of alcohol consumption.
Keywords: alcohol impaired driving, alcohol consumption, binge drinking, college students, parental modeling
Procedia PDF Downloads 280
83 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep the probability of each user's rate outage below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, the decomposition-based large deviation inequality and the Bernstein-type inequality convex restriction methods are used to solve the optimization problem under imperfect CSI. These methods are used to achieve improved output quality and lower complexity. They provide a safe, tractable approximation of the original rate outage constraints. Based on these method implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information
Procedia PDF Downloads 813
82 Solving Nonconvex Economic Load Dispatch Problem Using Particle Swarm Optimization with Time Varying Acceleration Coefficients
Authors: Alireza Alizadeh, Hossein Ghadimi, Oveis Abedinia, Noradin Ghadimi
Abstract:
In this paper, a Particle Swarm Optimization with Time Varying Acceleration Coefficients (PSO-TVAC) is proposed to solve the optimal economic load dispatch (ELD) problem. The proposed methodology easily takes care of solving non-convex economic load dispatch problems along with different constraints such as transmission losses, dynamic operation constraints and prohibited operating zones. The proposed approach has been implemented on the 3-machine 6-bus system, the IEEE 5-machine 14-bus system, the IEEE 6-machine 30-bus system and a 13-thermal-unit power system. The proposed technique is compared with a hybrid approach for solving the ELD problem with the valve-point effect. The comparison results prove the capability of the proposed method, giving significant improvements in generation cost for the economic load dispatch problem.
Keywords: PSO-TVAC, economic load dispatch, non-convex cost function, prohibited operating zone, transmission losses
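As an illustration of the PSO-TVAC mechanics described above, here is a minimal Python sketch applied to a toy three-unit dispatch with a valve-point-style cost; all cost coefficients, generator limits, the 300 MW demand, and the penalty-based handling of the power balance are hypothetical choices, not the paper's test systems.

```python
# Minimal sketch of PSO with time-varying acceleration coefficients (TVAC) on a
# toy 3-unit dispatch with a valve-point-style cost. All data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.008, 0.010, 0.012])          # hypothetical quadratic cost coefficients
b = np.array([7.0, 8.5, 9.0])
c = np.array([200.0, 180.0, 140.0])
e, f = np.array([50.0, 40.0, 30.0]), np.array([0.06, 0.05, 0.04])  # valve-point terms
pmin, pmax, demand = np.array([50.0, 40.0, 30.0]), np.array([200.0, 150.0, 120.0]), 300.0

def cost(P):                                  # non-convex fuel cost + demand-balance penalty
    fuel = a * P**2 + b * P + c + np.abs(e * np.sin(f * (pmin - P)))
    return fuel.sum(axis=-1) + 1e3 * (P.sum(axis=-1) - demand) ** 2

n_particles, iters, w = 30, 200, 0.7
X = rng.uniform(pmin, pmax, size=(n_particles, 3))
V = np.zeros_like(X)
pbest, pbest_val = X.copy(), cost(X)
gbest = pbest[pbest_val.argmin()].copy()

for t in range(iters):
    # TVAC: cognitive c1 decreases 2.5 -> 0.5, social c2 increases 0.5 -> 2.5
    c1 = 2.5 - 2.0 * t / iters
    c2 = 0.5 + 2.0 * t / iters
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, pmin, pmax)            # respect generator limits
    val = cost(X)
    better = val < pbest_val
    pbest[better], pbest_val[better] = X[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("dispatch (MW):", gbest.round(1), "cost:", round(float(cost(gbest)), 2))
```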
Procedia PDF Downloads 387
81 An Efficient Stud Krill Herd Framework for Solving Non-Convex Economic Dispatch Problem
Authors: Bachir Bentouati, Lakhdar Chaib, Saliha Chettih, Gai-Ge Wang
Abstract:
The economic dispatch (ED) problem is a basic problem in power systems; its main goal is to find the most favorable generation dispatch for each unit, reduce the total power generation cost, and meet all system limitations. A recently developed heuristic algorithm called Stud Krill Herd (SKH) has been employed in this paper to treat non-convex ED problems. The krill herd algorithm has been modified using the stud selection and crossover (SSC) operator to enhance solution quality and avoid local optima. The effect of SKH is demonstrated on two case studies, 13-unit and 40-unit test systems, to verify its performance and applicability in solving ED problems. In these systems, SKH can successfully determine the most economical generation and distribute the load requirements among the online generators. The results showed that the use of the proposed SKH method could reduce the total cost of generation and optimize the fulfillment of the load requirements.
Keywords: stud krill herd, economic dispatch, crossover, stud selection, valve-point effect
Procedia PDF Downloads 198
80 Optimal Economic Restructuring Aimed at an Optimal Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions
Authors: Alexander Vaninsky
Abstract:
The objective of this paper is to find the way of economic restructuring - that is, change in the shares of sectoral gross outputs - that results in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for the energy consumption and CO2 emissions to determine the projection of the gradient of GDP, and the antigradients of the energy consumption and CO2 emissions, respectively, on a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of the steepest increase (decrease) of the objective function, and their projections retain this property for the functions' restriction to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximal possible positive correlation with each of the three. This convex combination provides the desired direction of the structural change. The national economy of the United States is used as an example application.
Keywords: economic restructuring, input-output analysis, Divisia index, factorial decomposition, E3 models
Procedia PDF Downloads 314
79 Numerical Investigations on the Coanda Effect
Authors: Florin Frunzulica, Alexandru Dumitrache, Octavian Preotu
Abstract:
The Coanda effect consists of the tendency of a jet to remain attached to a sufficiently long/large convex surface. Flows deflected by a curved surface have attracted great interest during the last fifty years; a major interest in the study of this phenomenon is the possibility of applying this effect to aircraft with short take-off and landing and to thrust vectoring. It is also used in applications involving mixing of two or more fluids, noise attenuation, ventilation, etc. The paper proposes a numerical study of an aerodynamic configuration that can passively amplify the Coanda effect. On a wing flap with a predetermined configuration, a channel is applied between two particular zones, a low-pressure one and a high-pressure one, respectively. The secondary flow through this channel yields a gap between the jet and the convex surface, keeping the jet attached over a longer distance. Active control of the secondary flow through the channel, based on altering its cross-section, controls the attachment of the jet to the surface and automatically controls the deviation angle of the jet. The numerical simulations have been performed in Ansys Fluent for a series of wing flap-channel configurations with varying jet velocity. The numerical results are in good agreement with experimental results.
Keywords: blowing jet, CFD, Coanda effect, circulation control
Procedia PDF Downloads 346
78 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics
Authors: Varun K, Pramod B. Balareddy
Abstract:
Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to the reflector surface accuracy. Hence, accurate estimation of wind forces becomes important, as it is the primary input for the design and analysis of the reflector system. In the present work, numerical simulation of the wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and the published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces, respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift and moment for the parabolic reflector. Pressure coefficients (Cp) over the front and rear faces of the reflector are extracted over the surface of the reflector to study the net pressure variations. These resultant pressure variations are compared with the published wind tunnel data for different angles of attack. It was observed from the CFD simulations that both the convex and concave faces of the reflector experience a band of pressure variations for the positive and negative angles of attack, respectively. In the published wind tunnel data, pressure variations over convex surfaces are assumed to be uniform, and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, it was observed that the maximum pressure coefficients for α ranging from +30° to -90° and for α=+90° were lower. For α ranging from +45° to +75°, the maximum pressure coefficients were higher as compared to the wind tunnel data. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. Variations in Cd, Cl and Cm over α=+90° to α=-90° were in close agreement with the experimental data.
Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient
Procedia PDF Downloads 257
77 Development of Algorithms for Solving and Analyzing Special Problems Transports Type
Authors: Dmitri Terzi
Abstract:
The article presents the results of an algorithmic study of a special optimization problem of the transport type (the traveling salesman problem). 1) To solve the problem, a new natural algorithm has been developed based on the decomposition of the initial data into convex hulls, which has a number of advantages: it is applicable for a fairly large dimension, does not require a large amount of memory, and has fairly good performance. The relevance of the algorithm lies in the fact that, in practice, programs for problems with no more than about twenty traversal points are widely used, while for large-scale problems the availability of algorithms and programs of this kind is limited. The proposed algorithm is called natural because the optimal solution found by an exact algorithm is not always feasible due to the presence of many other factors that may require some additional restrictions. 2) Another, inverse, problem solved here is to describe a class of traveling salesman problems that have a predetermined optimal solution. The constructed Algorithm 2 allows us to characterize the structure of traveling salesman problems, as well as to construct test problems for evaluating the effectiveness of algorithms and for other purposes. 3) The appendix presents a software implementation of Algorithm 1 (in MATLAB), which can be used to solve practical problems, as well as in the educational process on operations research and optimization methods.
Keywords: traveling salesman problem, solution construction algorithm, convex hulls, optimality verification
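To make the convex-hull idea concrete, here is a minimal Python sketch of a classical hull-insertion heuristic: the tour starts from the convex hull of the points and the remaining interior points are inserted where they lengthen the tour least. The paper's own decomposition into nested hulls may differ, and the 20 random points are purely illustrative.

```python
# Minimal sketch of a convex-hull-based TSP heuristic (cheapest insertion starting
# from the hull). Not the paper's Algorithm 1; points are random placeholders.
import numpy as np
from scipy.spatial import ConvexHull

def hull_insertion_tour(points):
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    tour = list(ConvexHull(points).vertices)        # hull vertices, in order
    remaining = set(range(n)) - set(tour)
    while remaining:
        best = None                                  # (extra length, point, position)
        for p in remaining:
            for k in range(len(tour)):
                i, j = tour[k], tour[(k + 1) % len(tour)]
                extra = dist[i, p] + dist[p, j] - dist[i, j]
                if best is None or extra < best[0]:
                    best = (extra, p, k + 1)
        _, p, pos = best
        tour.insert(pos, p)                          # cheapest insertion of interior point
        remaining.remove(p)
    return tour, sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))

rng = np.random.default_rng(1)
pts = rng.random((20, 2))                            # 20 random traversal points
tour, length = hull_insertion_tour(pts)
print("tour:", tour, "length:", round(length, 3))
```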
Procedia PDF Downloads 73
76 Fill Rate Window as a Criterion for Spares Allocation
Authors: Michael Dreyfuss, Yahel Giat
Abstract:
Limited battery range and long recharging times are the greatest obstacles to the successful adoption of electric cars. One of the suggestions to overcome these problems is that carmakers retain ownership of batteries and provide a battery swapping service, so that customers exchange their depleted batteries for recharged batteries. Motivated by this example, we consider the problem of optimal spares allocation in an exchangeable-item, multi-location repair system. We generalize the standard service measures of fill rate and average waiting time to reflect the fact that customers penalize the service provider only if they have to wait more than a 'tolerable' time window. These measures are denoted as the window fill rate and the truncated waiting time, respectively. We find that the truncated waiting time is convex and therefore a greedy algorithm solves the spares allocation problem efficiently. We show that the window fill rate is generally S-shaped, describe an efficient algorithm to find a near-optimal solution, and detail a priori and a posteriori upper bounds on the distance from the optimum. The theory is complemented with a large-scale numerical example demonstrating spare battery allocation in battery swapping stations.
Keywords: convex-concave optimization, exchangeable item, M/G/infinity, multiple location, repair system, spares allocation, window fill rate
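A minimal sketch of the greedy marginal allocation that convexity makes efficient is given below. As a stand-in for the truncated waiting time, each location's cost is taken to be the expected Poisson backorders of its M/G/∞ repair pipeline (a convex, decreasing function of the spare count); the three pipeline rates and the budget of 15 spares are hypothetical.

```python
# Minimal sketch: greedy marginal allocation of spares across locations when the
# per-location cost is convex and decreasing in the number of spares. The cost
# used here (expected Poisson backorders) is a hypothetical stand-in for the
# paper's truncated waiting time.
from math import exp
import itertools

def expected_backorders(rate, s):
    """E[(X - s)^+] for X ~ Poisson(rate); convex and decreasing in s."""
    pmf, total = exp(-rate), 0.0
    for k in itertools.count():
        if k > s:
            total += (k - s) * pmf
        pmf *= rate / (k + 1)
        if k > rate + 10 * rate**0.5 + s + 20:       # truncate the negligible tail
            return total

def greedy_allocation(pipeline_rates, budget):
    spares = [0] * len(pipeline_rates)
    for _ in range(budget):
        gains = [expected_backorders(r, s) - expected_backorders(r, s + 1)
                 for r, s in zip(pipeline_rates, spares)]
        spares[gains.index(max(gains))] += 1          # spend where it helps most
    return spares

# Hypothetical example: three swapping stations with different repair-pipeline rates.
print(greedy_allocation([4.0, 1.5, 7.0], budget=15))
```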
Procedia PDF Downloads 493
75 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction
Authors: Bastien Batardière, Joon Kwon
Abstract:
For finite-sum optimization, variance-reduced gradient methods (VR) compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and which benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
Keywords: convex optimization, variance reduction, adaptive algorithms, loopless
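The combination is easy to sketch. Below is a minimal Python example, not the authors' exact AdaLVR, that plugs a loopless SVRG (L-SVRG) gradient estimator into coordinate-wise AdaGrad on a least-squares problem; the data, step size, and snapshot probability are hypothetical choices.

```python
# Minimal sketch: L-SVRG variance-reduced gradients fed into coordinate-wise AdaGrad,
# demonstrated on least squares. Hyperparameters and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]        # gradient of f_i(w) = 0.5*(a_i^T w - b_i)^2
full_grad = lambda w: A.T @ (A @ w - b) / n

w = np.zeros(d)
snapshot, mu = w.copy(), full_grad(w)                  # L-SVRG reference point and its full gradient
G = np.zeros(d)                                        # AdaGrad accumulator
eta, p, eps = 0.5, 1.0 / n, 1e-8

for t in range(20000):
    i = rng.integers(n)
    g = grad_i(w, i) - grad_i(snapshot, i) + mu        # variance-reduced estimator
    G += g * g                                         # AdaGrad: accumulate squared gradients
    w -= eta * g / (np.sqrt(G) + eps)                  # coordinate-wise adaptive step
    if rng.random() < p:                               # loopless: refresh snapshot at random
        snapshot, mu = w.copy(), full_grad(w)

print("objective:", 0.5 * np.mean((A @ w - b) ** 2))
```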
Procedia PDF Downloads 7074 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is studied in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counter-parties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function, but to our knowledge it is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on the historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
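For orientation, here is a minimal Python sketch of the standard-formula aggregation that makes the market SCR awkward to optimize: sub-module SCRs from the stress tests are combined through a fixed correlation matrix, and an Euler allocation shows each sub-module's contribution. The sub-module values and the correlation matrix below are illustrative placeholders, not the regulatory calibration.

```python
# Minimal sketch of standard-formula SCR aggregation with an Euler allocation.
# Sub-module values and correlations are illustrative, not regulatory figures.
import numpy as np

submodules = ["interest", "equity", "property", "spread", "fx", "concentration"]
scr_sub = np.array([12.0, 35.0, 8.0, 10.0, 6.0, 4.0])   # hypothetical stressed losses (M€)

corr = np.array([                                        # illustrative correlations only
    [1.0, 0.5, 0.5, 0.5, 0.25, 0.0],
    [0.5, 1.0, 0.75, 0.75, 0.25, 0.0],
    [0.5, 0.75, 1.0, 0.5, 0.25, 0.0],
    [0.5, 0.75, 0.5, 1.0, 0.25, 0.0],
    [0.25, 0.25, 0.25, 0.25, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
])

# SCR_mkt = sqrt( sum_ij corr_ij * SCR_i * SCR_j )
scr_market = float(np.sqrt(scr_sub @ corr @ scr_sub))
print(f"market SCR: {scr_market:.1f} M€")

# Euler allocation: marginal contribution of each sub-module, useful for spotting
# the lack of diversification reported above for equities.
contrib = scr_sub * (corr @ scr_sub) / scr_market
for name, c in zip(submodules, contrib):
    print(f"{name:>13}: {c:6.1f}")
```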
Procedia PDF Downloads 11773 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta method SEs; and marginal standardisation with permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
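As an illustration of one candidate, marginal standardisation, here is a minimal Python sketch: a logistic model is fitted, every participant's risk is predicted under treatment and under control, and the adjusted relative risk and risk difference are the ratio and the difference of the averaged predictions. The simulated trial below, with one prognostic covariate and hypothetical coefficients, is purely illustrative and is not the study's simulation design.

```python
# Minimal sketch of the marginal standardisation estimator on a simulated trial.
# The data-generating coefficients are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
covariate = rng.normal(size=n)                        # a prognostic covariate
treatment = rng.integers(0, 2, size=n)                # 1:1 randomisation
logit_p = -1.0 + 0.5 * treatment + 0.7 * covariate    # hypothetical true model
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([treatment, covariate]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Marginal standardisation: set treatment to 1 (then 0) for the whole sample.
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1, 0
p1, p0 = fit.predict(X1).mean(), fit.predict(X0).mean()

print(f"adjusted relative risk:   {p1 / p0:.3f}")
print(f"adjusted risk difference: {p1 - p0:.3f}")
# Standard errors would follow via the delta method or permutation, as listed above.
```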
Procedia PDF Downloads 114
72 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is characterized as a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure, as well as discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated with a variety of different topology, material and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subject to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), and in this way gradually refines the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, where a new topology, materials and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality due to the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
Procedia PDF Downloads 46
71 Network Analysis and Sex Prediction based on a full Human Brain Connectome
Authors: Oleg Vlasovets, Fabian Schaipp, Christian L. Mueller
Abstract:
We conduct a network analysis and predict the sex of 1000 participants based on the "connectome" - pairwise Pearson's correlations across 436 brain parcels. We solve the non-smooth convex optimization problem known under the name of Graphical Lasso, where the solution includes a low-rank component. With this solution and a machine learning model for sex prediction, we explain the connectivity patterns that relate brain parcels to sex.
Keywords: network analysis, neuroscience, machine learning, optimization
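A minimal sketch of this kind of pipeline is given below, using scikit-learn's GraphicalLasso (sparse precision only; the low-rank component mentioned above is not modelled) on small synthetic stand-in data, followed by a logistic-regression sex classifier. The subject count, parcel count, and the sex-linked coupling injected into the signals are hypothetical.

```python
# Minimal sketch: sparse precision matrices from Graphical Lasso as per-subject
# features for sex prediction. Synthetic stand-in data, hypothetical dimensions.
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_parcels, n_time = 100, 30, 200          # tiny stand-in for 1000 x 436
sex = rng.integers(0, 2, size=n_subjects)

features = []
for s in sex:
    signal = rng.normal(size=(n_time, n_parcels))
    signal[:, 0] += 0.5 * s * signal[:, 1]            # hypothetical sex-linked coupling
    precision = GraphicalLasso(alpha=0.05).fit(signal).precision_
    iu = np.triu_indices(n_parcels, k=1)
    features.append(precision[iu])                    # upper-triangular edges as features
X = np.array(features)

scores = cross_val_score(LogisticRegression(max_iter=2000, C=0.1), X, sex, cv=5)
print("cross-validated sex-prediction accuracy:", scores.mean().round(3))
```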
Procedia PDF Downloads 147
70 Existence Result of Third Order Functional Random Integro-Differential Inclusion
Authors: D. S. Palimkar
Abstract:
The functional random integro-differential inclusion (FRIGDI) seems to be new and includes as special cases several known random differential inclusions already studied in the literature for various aspects of their solutions. In this paper, we prove an existence result for the FRIGDI in the non-convex case of the multi-valued function involved in it, using a random fixed point theorem of B. C. Dhage and the Carathéodory condition. This result is new to the theory of differential inclusions.
Keywords: Carathéodory condition, random differential inclusion, random solution, integro-differential inclusion
Procedia PDF Downloads 466
69 Some Results on Generalized Janowski Type Functions
Authors: Fuad Al Sarari
Abstract:
The purpose of the present paper is to study subclasses of analytic functions which generalize the classes of Janowski functions introduced by Polatoglu. We study certain convolution conditions. This leads to a study of the sufficient condition and the neighborhood results related to the functions in the class S(T; H; F), and a study of new subclasses of analytic functions that are defined using notions of the generalized Janowski classes and -symmetrical functions. In the quotient of analytical representations of starlikeness and convexity with respect to symmetric points, certain inherent properties are pointed out.
Keywords: convolution conditions, subordination, Janowski functions, starlike functions, convex functions
Procedia PDF Downloads 67
68 On the Topological Entropy of Nonlinear Dynamical Systems
Authors: Graziano Chesi
Abstract:
The topological entropy plays a key role in linear dynamical systems, allowing one to establish the existence of stabilizing feedback controllers for linear systems in the presence of communication constraints. This paper addresses the determination of a robust value of the topological entropy in nonlinear dynamical systems, specifically the largest value of the topological entropy over all linearized models in a region of interest of the state space. It is shown that a sufficient condition for establishing upper bounds on the sought robust value of the topological entropy can be given in terms of a semidefinite program (SDP), which belongs to the class of convex optimization problems.
Keywords: non-linear system, communication constraint, topological entropy
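To fix ideas about the quantity being bounded, here is a minimal Python sketch: for a discrete-time linearization x_{k+1} = A x_k, the topological entropy is the sum of base-2 logarithms of the unstable eigenvalue magnitudes, and the robust value over a region is approximated here by brute-force sampling of linearized models of an arbitrarily chosen toy map. The SDP-based sufficient condition of the paper is not reproduced.

```python
# Minimal sketch: topological entropy of linearized models, with the robust value
# over a box-shaped region approximated by sampling (toy Jacobian, hypothetical).
import numpy as np

def topological_entropy(A):
    eig = np.linalg.eigvals(A)
    mags = np.abs(eig)
    return float(np.sum(np.log2(mags[mags > 1.0])))   # sum over unstable eigenvalues

def jacobian(x):                                      # arbitrary toy nonlinear map
    return np.array([[1.2 + 0.3 * np.sin(x[0]), 0.4],
                     [0.1, 0.8 + 0.2 * x[1] ** 2]])

grid = np.linspace(-1.0, 1.0, 41)                     # region of interest of the state space
robust_value = max(topological_entropy(jacobian(np.array([x1, x2])))
                   for x1 in grid for x2 in grid)
print("largest topological entropy over the region (bits per step):", round(robust_value, 3))
```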
Procedia PDF Downloads 320
67 Multidimensional Integral and Discrete Opial–Type Inequalities
Authors: Maja Andrić, Josip Pečarić
Abstract:
Over the last five decades, an enormous amount of work has been done on Opial's integral inequality, dealing with new proofs, various generalizations, extensions and discrete analogs. The Opial inequality is recognized as a fundamental result in the analysis of qualitative properties of solutions of differential equations. We use submultiplicative convex functions, appropriate representations of functions and inequalities involving means to obtain generalizations and extensions of certain known multidimensional integral and discrete Opial-type inequalities.
Keywords: Opial's inequality, Jensen's inequality, integral inequality, discrete inequality
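For reference, the classical one-dimensional inequality that such generalizations extend is usually stated as follows (a standard formulation, not quoted from the paper):

```latex
% Classical Opial inequality: if x is absolutely continuous on [0, h]
% with x(0) = x(h) = 0, then
\int_0^h \left| x(t)\, x'(t) \right| dt \;\le\; \frac{h}{4} \int_0^h \left( x'(t) \right)^2 dt ,
% and the constant h/4 is the best possible.
```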
Procedia PDF Downloads 439
66 Spatial Structure of First-Order Voronoi for the Future of Roundabout Cairo Since 1867
Authors: Ali Essam El Shazly
Abstract:
The Haussmannization plan of Cairo in 1867 formed a regular network of roundabout spaces, though deteriorated at present. The method of identifying the spatial structure of roundabout Cairo for conservation matches the Voronoi diagram with the space syntax through their shared geometrical property of spatial convexity. In this initiative, the primary convex hull of first-order Voronoi adopts the integral and control measurements of space syntax on Cairo's roundabout generators. The functional essence of royal palaces optimizes the roundabout structure in terms of spatial measurements and the symbolic Voronoi projection of 'Tahrir Roundabout' over the Giza Nile and Pyramids. Some roundabouts of major public and commercial landmarks surround the pole of 'Ezbekia Garden' with higher control than integral measurements, which filters the new spatial structure from the adjacent traditional town. Nevertheless, the least integral and control measures correspond to the Voronoi contents of pollutant workshops and the plateau of the old Cairo Citadel, with the visual compensation of new royal landmarks on top. Meanwhile, the extended suburbs of infinite Voronoi polygons arrange high-control generators of chateaux housing in 'garden city' environs. The point pattern of roundabouts determines the geometrical characteristics of the Voronoi polygons. The measured lengths of Voronoi edges alternate between the zoned short range at the new poles of Cairo and the distributed structure of longer range. Nevertheless, the shortest range of generator-vertex geometry concentrates at 'Ezbekia Garden', where the crossways of vast Cairo intersect, which maximizes the variety of choice at different spatial resolutions. However, the symbolic 'Hippodrome', which is the largest public landmark, forms exclusive geometrical measurements, while structuring a most integrative roundabout to parallel the royal syntax. An overview of the symbolic convex hull of Voronoi with space syntax interconnects Parisian Cairo with the spatial chronology of scattered monuments to conceive one universal Cairo structure. Accordingly, the proposed 'Voronoi-syntax' methodology prospects the future conservation of roundabout Cairo at the inferred city-level concept.
Keywords: roundabout Cairo, first-order Voronoi, space syntax, spatial structure
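The geometric half of that method is easy to sketch in Python: build the first-order Voronoi diagram of the roundabout generator points with SciPy and measure the lengths of its finite edges, whose short and long ranges the abstract reads syntactically. The twelve generator coordinates below are hypothetical placeholders, not the Cairo roundabouts.

```python
# Minimal sketch: first-order Voronoi diagram of generator points and the lengths
# of its finite edges. Coordinates are hypothetical placeholders.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
generators = rng.uniform(0, 10, size=(12, 2))         # hypothetical roundabout centres

vor = Voronoi(generators)
edge_lengths = []
for (v1, v2) in vor.ridge_vertices:
    if v1 == -1 or v2 == -1:                          # skip infinite ridges (open polygons)
        continue
    edge_lengths.append(np.linalg.norm(vor.vertices[v1] - vor.vertices[v2]))

edge_lengths = np.array(edge_lengths)
print(f"{len(edge_lengths)} finite Voronoi edges, "
      f"range {edge_lengths.min():.2f}-{edge_lengths.max():.2f}, "
      f"mean {edge_lengths.mean():.2f}")
```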
Procedia PDF Downloads 501
65 Application of Regularized Low-Rank Matrix Factorization in Personalized Targeting
Authors: Kourosh Modarresi
Abstract:
The Netflix problem has brought the topic of “Recommendation Systems” into the mainstream of computer science, mathematics, and statistics. Though much progress has been made, the available algorithms do not obtain satisfactory results; their success rate is rarely above 5%. This work is based on the belief that the main challenge is to come up with “scalable personalization” models. This paper uses an adaptive regularization of inverse singular value decomposition (SVD) that applies adaptive penalization on the singular vectors. The results show far better matching for recommender systems when compared to those from state-of-the-art models in the industry.
Keywords: convex optimization, LASSO, regression, recommender systems, singular value decomposition, low rank approximation
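One simple way to realize a regularized low-rank factorization of this kind is singular value soft-thresholding, sketched below in Python on a synthetic ratings matrix; the matrix, the observation mask, and the growing penalty schedule are hypothetical, and the paper's adaptive penalization of the singular vectors may differ in detail.

```python
# Minimal sketch: regularized SVD via soft-thresholding of singular values on a
# synthetic ratings matrix. All data and the penalty schedule are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true = rng.random((50, 2)) @ rng.random((2, 40))       # hidden low-rank preference matrix
mask = rng.random((50, 40)) < 0.3                      # only 30% of ratings observed
ratings = np.where(mask, true + 0.05 * rng.normal(size=true.shape), 0.0)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
penalty = 0.5 * np.arange(1, len(s) + 1)               # adaptive: grows with component index
s_reg = np.maximum(s - penalty, 0.0)                   # soft-thresholding of singular values
reconstruction = (U * s_reg) @ Vt                      # low-rank scoring matrix

rmse = np.sqrt(np.mean((reconstruction[~mask] - true[~mask]) ** 2))
print("rank kept:", int((s_reg > 0).sum()), " RMSE on unobserved entries:", round(rmse, 3))
```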
Procedia PDF Downloads 45564 Labyrinth Fractal on a Convex Quadrilateral
Authors: Harsha Gopalakrishnan, Srijanani Anurag Prasad
Abstract:
Quadrilateral labyrinth fractals are a new type of fractal introduced in this paper. They belong to a unique class of fractals defined on any plane quadrilateral. This form of fractal is inspired by the previously researched labyrinth fractals on the unit square and the triangle. This work describes how to construct a quadrilateral labyrinth fractal and looks at the circumstances in which it can be understood as the attractor of an iterated function system. Furthermore, some of its topological properties and the Hausdorff and box-counting dimensions of the quadrilateral labyrinth fractals are studied.
Keywords: fractals, labyrinth fractals, dendrites, iterated function system, Hausdorff dimension, box-counting dimension, non-self-similar, non-self-affine, connected, path connected
Procedia PDF Downloads 7563 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions
Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by onboard sensors that bring realistic errors and delays, while the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational motion and rotational motion is addressed via a dual quaternion based kinematic description. In this work, G&C is formulated as a convex optimization problem where constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte-Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, the uncertainty of the target's motion and attitude, and actuator errors. A capture scenario is tested with the robotic test bench, which has onboard sensors that estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.
Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing
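The flavour of the convex MPC formulation can be sketched in a few lines of Python with CVXPY, reduced here to the translational part only (a per-axis double integrator) with thrust and approach-speed limits; the dual-quaternion coupling of rotation and translation is not reproduced, and the dynamics, horizon, and bounds are hypothetical.

```python
# Minimal sketch of a convex MPC for relative translational motion with thruster
# and speed limits. Dynamics, horizon, and bounds are hypothetical placeholders.
import cvxpy as cp
import numpy as np

dt, N = 1.0, 30                                        # step [s] and horizon length
A = np.block([[np.eye(3), dt * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
B = np.block([[0.5 * dt**2 * np.eye(3)], [dt * np.eye(3)]])

x0 = np.array([10.0, 5.0, -3.0, 0.0, 0.0, 0.0])        # initial relative position/velocity
u_max, v_max = 0.1, 1.0                                # thrust accel [m/s^2], speed [m/s]

x = cp.Variable((6, N + 1))
u = cp.Variable((3, N))
constraints = [x[:, 0] == x0, x[:, N] == 0]            # reach the target relative state
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k], "inf") <= u_max,  # per-axis thruster limit
                    cp.norm(x[3:, k], 2) <= v_max]     # approach speed limit
objective = cp.Minimize(cp.sum_squares(u))             # minimize control effort
problem = cp.Problem(objective, constraints)
problem.solve()
print("status:", problem.status, " total effort:", round(problem.value, 4))
```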
Procedia PDF Downloads 14662 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring
Authors: Younghoon Kim, Seoung Bum Kim
Abstract:
One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of normal data points equals a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed. This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display.
Keywords: control chart, mixed integer programming, one-class classification, support vector data description
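A minimal sketch of the core idea, minimizing the radius of a sphere that is forced to contain a user-specified number of points via binary indicators and a big-M constraint, is given below in Python with CVXPY (no kernel trick); a mixed-integer conic solver such as SCIP, GUROBI, or ECOS_BB must be installed, and the data and the 95% target proportion are hypothetical.

```python
# Minimal sketch: minimum-radius sphere covering exactly a chosen number of points,
# posed as a mixed-integer conic program. Data and target proportion are hypothetical;
# solving requires an installed mixed-integer conic solver (e.g., SCIP or ECOS_BB).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(40, 2))
outliers = rng.normal(loc=6.0, size=(4, 2))
X = np.vstack([normal, outliers])
n, d = X.shape
k = int(round(0.95 * n))                               # target count of "normal" points

center = cp.Variable(d)
radius = cp.Variable(nonneg=True)
inside = cp.Variable(n, boolean=True)                  # 1 if the point must lie in the sphere
bigM = 2 * np.linalg.norm(X - X.mean(axis=0), axis=1).max()

constraints = [cp.sum(inside) >= k]
for i in range(n):
    # If inside[i] = 1 the point is within the radius, otherwise the bound is relaxed.
    constraints.append(cp.norm(X[i] - center) <= radius + bigM * (1 - inside[i]))

problem = cp.Problem(cp.Minimize(radius), constraints)
problem.solve()                                        # CVXPY dispatches to an installed MI solver
print("radius:", round(float(radius.value), 3),
      "points excluded:", int(n - inside.value.sum()))
```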
Procedia PDF Downloads 17461 The Possibility of Solving a 3x3 Rubik’s Cube under 3 Seconds
Authors: Chung To Kong, Siu Ming Yiu
Abstract:
Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again. The newest record is 3.47 seconds. There are many factors that affect the timing, including turns per second (tps), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube solving time will be discussed using convex optimization. Extended analysis of the world records will be used to understand how to improve the timing. With an understanding of each part of the solving process, the paper suggests a list of speed improvement techniques. Based on the analysis of the world record, there is a high possibility that the 3-second mark will be broken soon.
Keywords: Rubik's Cube, speed, finger trick, optimization
Procedia PDF Downloads 206
60 Study of Fast Etching of Silicon for the Fabrication of Bulk Micromachined MEMS Structures
Authors: V. Swarnalatha, A. V. Narasimha Rao, P. Pal
Abstract:
The present research reports the investigation of fast etching of silicon for the fabrication of microelectromechanical systems (MEMS) structures using silicon wet bulk micromachining. Low-concentration tetramethylammonium hydroxide (TMAH) and hydroxylamine (NH2OH) are used as the main etchant and the additive, respectively. The concentration of NH2OH is varied to optimize the composition to achieve the best etching characteristics, such as a high etch rate, significantly high undercutting at convex corners for the fast release of microstructures from the substrate, and improved etched surface morphology. These etching characteristics are studied on Si{100} and Si{110} wafers, as they are the most widely used wafers in the fabrication of MEMS structures as well as diodes, transistors and integrated circuits.
Keywords: KOH, MEMS, micromachining, silicon, TMAH, wet anisotropic etching
Procedia PDF Downloads 201