Search results for: hydrostatic and hydrodynamic optimization
2415 Optimal Investment and Consumption Decision for an Investor with Ornstein-Uhlenbeck Stochastic Interest Rate Model through Utility Maximization
Authors: Silas A. Ihedioha
Abstract:
In this work, it is considered that an investor's portfolio comprises two assets: a risky stock whose price process is driven by geometric Brownian motion, and a risk-free asset with an Ornstein-Uhlenbeck stochastic interest rate of return, where consumption, taxes, transaction costs, and dividends are involved. This paper aims at optimizing the investor's expected utility of consumption and terminal return on the investment at the terminal time under a power utility preference. Using the dynamic optimization procedure of the maximum principle, a second-order nonlinear partial differential equation (PDE), the Hamilton-Jacobi-Bellman (HJB) equation, was obtained, from which an ordinary differential equation (ODE) was derived via elimination of variables. The solution of the ODE gave the closed-form solution of the investor's problem. It was found that the optimal investment in the risky asset is horizon dependent and is a ratio of the total amount available for investment to the relative risk aversion coefficient.
Keywords: optimal, investment, Ornstein-Uhlenbeck, utility maximization, stochastic interest rate, maximum principle
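The reported structure of the optimal risky investment (total wealth divided by the relative risk aversion coefficient) mirrors the classical Merton fraction. A minimal sketch of that classical relation follows; the function name and all parameter values are illustrative assumptions, not the paper's exact closed-form solution.

```python
# Hedged sketch: classical Merton-type optimal investment under power (CRRA)
# utility. mu, r, sigma, gamma and W below are illustrative placeholders.

def merton_optimal_investment(W, mu, r, sigma, gamma):
    """Amount invested in the risky asset: excess return over risk, scaled by wealth."""
    return (mu - r) / (gamma * sigma ** 2) * W

amount = merton_optimal_investment(W=100_000, mu=0.08, r=0.03, sigma=0.2, gamma=2.0)
print(round(amount, 2))  # 62500.0
```

Note how doubling the risk aversion coefficient gamma halves the optimal risky allocation, consistent with the ratio structure the abstract reports.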
Procedia PDF Downloads 225
2414 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM
Authors: Yong Min You
Abstract:
Research on the torque ripple of permanent magnet synchronous motors (PMSMs) has progressed rapidly, as torque ripple affects the noise and vibration of electric vehicles. There are several ways to reduce torque ripple: increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, these conventional methods have disadvantages in terms of material cost and productivity. Adequate demagnetization characteristics must also be attained for electric vehicle application. Due to the rare-earth supply issue, demand for Dy-free permanent magnets, which can be applied to PMSMs for electric vehicles, has been increasing; since Dy-free permanent magnets have lower coercivity, the demagnetization characteristic becomes more significant. To improve the torque ripple as well as the demagnetization characteristics, which are significant parameters for electric vehicle application, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed to optimize the design variables of the unequal air-gap model: the shape of the unequal air-gap and the angle between the V-shape magnets. The optimization process combines Latin Hypercube Sampling (LHS), the Kriging method, and a genetic algorithm (GA). Finite element analysis (FEA) is utilized to analyze the torque and demagnetization characteristics. The torque ripple and the demagnetization temperature of the initial 45 kW PMSM model with unequal air-gap are 10% and 146.8 degrees, respectively, which are reaching a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed, and an optimization process is conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was increased by 1.8% while maintaining the efficiency.
From these results, the shape-optimized unequal air-gap PMSM demonstrates an improvement in torque ripple and demagnetization temperature for electric vehicle application.
Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple
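The LHS-surrogate-GA loop described above can be sketched in miniature. This is an assumption-laden illustration: an RBF interpolator stands in for the Kriging model, a simple quadratic stands in for the FEA torque-ripple evaluation, and the GA is a bare-bones blend-crossover variant.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def objective(x):  # stand-in for an FEA torque-ripple evaluation (assumed)
    return (x[..., 0] - 0.3) ** 2 + (x[..., 1] - 0.7) ** 2

# 1) Latin Hypercube Sampling of the two normalized design variables
sample = qmc.LatinHypercube(d=2, seed=0).random(n=30)
values = objective(sample)

# 2) Surrogate model (RBF interpolation in place of Kriging)
surrogate = RBFInterpolator(sample, values)

# 3) Minimal genetic algorithm searching the surrogate
pop = rng.random((40, 2))
for _ in range(60):
    fitness = surrogate(pop)
    parents = pop[np.argsort(fitness)[:20]]          # keep the fitter half
    idx = rng.integers(0, 20, size=(40, 2))          # random parent pairs
    w = rng.random((40, 1))                          # blend crossover weights
    pop = np.clip(w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]
                  + rng.normal(0, 0.02, (40, 2)), 0, 1)  # Gaussian mutation

best = pop[np.argmin(surrogate(pop))]
print(best)  # should approach the stand-in optimum (0.3, 0.7)
```

In the actual study each `objective` call would be a full FEA run, which is exactly why the surrogate step matters.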
Procedia PDF Downloads 274
2413 Optimization of Oxygen Plant Parameters Simulating with MATLAB
Authors: B. J. Sonani, J. K. Ratnadhariya, Srinivas Palanki
Abstract:
Cryogenic engineering is a fast-growing branch of modern technology, with applications such as gas liquefaction in the gas and metal industries, medical science, space technology, and transportation. Low-temperature technology has also enabled superconducting materials, which reduce friction and wear in various system components. Liquid oxygen, hydrogen, and helium play a vital role in space applications, and liquefaction processes produce very-low-temperature liquids for research and modern applications. The air liquefaction system for oxygen plants in the gas industries is based on the Claude cycle. The effect of process parameters on the overall system is difficult to analyze by manual calculation, which provides the motivation to use process simulators for understanding the steady-state and dynamic behaviour of such systems. The parametric study of this system via MATLAB simulations provides useful guidelines for the preliminary design of an air liquefaction system based on the Claude cycle, since every organization tries to reduce cost and obtain optimum plant performance to stay competitive in the market.
Keywords: cryogenic, liquefaction, low-temperature, oxygen, Claude cycle, optimization, MATLAB
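The kind of parametric relation such a simulation sweeps can be illustrated with the textbook first-law liquid-yield estimate for a Claude cycle. All enthalpy values below are illustrative placeholders, not data from the paper.

```python
# Hedged sketch: Claude-cycle liquid yield per unit mass compressed,
# y = (h1 - h2)/(h1 - hf) + x * (h3 - he)/(h1 - hf)  (textbook form).

def claude_yield(h1, h2, hf, x, h3, he):
    """h1: compressor inlet, h2: after compression/cooling, hf: saturated
    liquid, x: mass fraction diverted through the expander, h3/he: expander
    inlet/exit enthalpies (all kJ/kg, illustrative)."""
    return (h1 - h2) / (h1 - hf) + x * (h3 - he) / (h1 - hf)

y = claude_yield(h1=460.0, h2=440.0, hf=-140.0, x=0.6, h3=430.0, he=300.0)
print(round(y, 4))  # 0.1633
```

A parametric study would sweep `x` and the operating pressures (which set the enthalpies) to maximize `y`, which is essentially what the MATLAB simulation automates.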
Procedia PDF Downloads 322
2412 Neighborhood Graph-Optimized Preserving Discriminant Analysis for Image Feature Extraction
Authors: Xiaoheng Tan, Xianfang Li, Tan Guo, Yuchuan Liu, Zhijun Yang, Hongye Li, Kai Fu, Yufang Wu, Heling Gong
Abstract:
Image data collected in practice often have high dimensionality and contain noise and redundant information; it is therefore necessary to extract a compact feature representation of the original perceived image. In this process, effective use of prior knowledge, such as the data structure distribution and sample labels, is the key to enhancing the discrimination and robustness of image features. Based on these considerations, this paper proposes a locality-preserving discriminant feature learning model based on graph optimization. The model has the following characteristics: (1) a locality-preserving constraint effectively excavates and preserves the local structural relationships between data; (2) the flexibility of graph learning is improved by constructing a new local geometric structure graph using label information and a nearest-neighbor threshold; (3) the L₂,₁ norm is used to redefine LDA, with a diagonal matrix introduced as its scale factor to select samples, which improves the robustness of feature learning. The validity and robustness of the proposed algorithm are verified by experiments on two public image datasets.
Keywords: feature extraction, graph optimization, local preserving projection, linear discriminant analysis, L₂,₁ norm
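The L₂,₁ norm mentioned above is the sum of the Euclidean norms of a matrix's rows, which is what makes it useful for row-sparse, outlier-robust formulations. A minimal sketch:

```python
import numpy as np

# Hedged sketch: the L2,1 norm sums the L2 norms of the rows, so an entire
# noisy row (sample) contributes linearly rather than quadratically.

def l21_norm(X):
    return np.sum(np.linalg.norm(X, axis=1))

X = np.array([[3.0, 4.0], [0.0, 0.0], [5.0, 12.0]])
print(l21_norm(X))  # row norms 5 + 0 + 13 = 18.0
```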
Procedia PDF Downloads 149
2411 Cross-Linked Amyloglucosidase Aggregates: A New Carrier Free Immobilization Strategy for Continuous Saccharification of Starch
Authors: Sidra Pervez, Afsheen Aman, Shah Ali Ul Qader
Abstract:
Attaining the optimum performance of an enzyme is often a question of devising an effective method for its immobilization. Cross-linked enzyme aggregates (CLEAs) are a new, carrier-free approach to enzyme immobilization. The method is exquisitely simple, involving precipitation of the enzyme from aqueous buffer followed by cross-linking of the resulting physical aggregates of enzyme molecules, and is amenable to rapid optimization. Among industrial enzymes, amyloglucosidase is an important amylolytic enzyme that hydrolyzes alpha (1→4) and alpha (1→6) glycosidic bonds in the starch molecule and produces glucose as the sole end product. The glucose liberated by amyloglucosidase can be used for the production of ethanol and glucose syrups; besides this, amyloglucosidase is widely used in various food and pharmaceutical industries. For production of amyloglucosidase on a commercial scale, filamentous fungi of the genus Aspergillus are mostly used because they secrete large amounts of enzymes extracellularly. The current investigation was based on the isolation and identification of filamentous fungi of the genus Aspergillus for the production of amyloglucosidase in submerged fermentation and the optimization of cultivation parameters for starch saccharification. Natural isolates were identified as Aspergillus niger KIBGE-IB36, Aspergillus fumigatus KIBGE-IB33, Aspergillus flavus KIBGE-IB34, and Aspergillus terreus KIBGE-IB35 on a taxonomical basis and by 18S rDNA analysis, and their sequences were submitted to GenBank. Among them, Aspergillus fumigatus KIBGE-IB33 was selected on the basis of maximum enzyme production. After optimization of the fermentation conditions, the enzyme was immobilized on CLEAs, and different parameters were optimized for maximum immobilization of amyloglucosidase.
Data on enzyme stability (thermal and storage) and reusability suggest the applicability of immobilized amyloglucosidase for continuous saccharification of starch in industrial processes.
Keywords: Aspergillus, immobilization, industrial processes, starch saccharification
Procedia PDF Downloads 496
2410 Identifying the Factors affecting on the Success of Energy Usage Saving in Municipality of Tehran
Authors: Rojin Bana Derakhshan, Abbas Toloie
Abstract:
To optimize and develop energy efficiency in buildings, the key elements of success in optimizing energy consumption must be recognized before any action is taken; principal component analysis, a simple and non-parametric method from linear algebra, is a valuable tool for this purpose. An energy management system was therefore implemented according to the international energy management standard ISO 50001:2011, and all energy parameters in the building were measured through an energy audit. In this paper, data mining is used to determine the key elements that most influence energy saving in buildings. The approach is based on statistical data mining techniques, using a feature selection method and fuzzy logic to compress the massive data set and refine the selected features. In parallel, the share of each energy-consuming element in overall energy dissipation, in percent, is identified as a separate norm using the results of the energy audit and the measured consumption parameters. Accordingly, energy saving solutions are divided into three categories: low-, medium-, and high-expense solutions.
Keywords: energy saving, key elements of success, optimization of energy consumption, data mining
Procedia PDF Downloads 468
2409 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions, and their results have sparked a lot of research on steepest descent methods, including the alternate minimization gradient method and Yuan's method. Inspired by these works, we modified the step size of the steepest descent method. We then compare the modification against the Barzilai-Borwein method, the alternate minimization gradient method, and Yuan's method on quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai-Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes give faster convergence than the other methods, especially for cases with large dimensions.
Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence
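The Barzilai-Borwein baseline the abstract compares against can be sketched on the same quadratic test-case family, f(x) = ½xᵀAx − bᵀx. The initial fixed step and iteration count are illustrative choices.

```python
import numpy as np

# Hedged sketch: gradient descent with the BB1 step size
# alpha_k = s^T s / s^T y, where s = x_{k+1} - x_k and y = g_{k+1} - g_k.

def bb_gradient_descent(A, b, x0, iters=50):
    x = x0.astype(float)
    g = A @ x - b                  # gradient of the quadratic
    alpha = 1e-3                   # fixed first step; BB needs two iterates
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        denom = s @ y
        if denom == 0.0:           # converged to machine precision
            break
        alpha = (s @ s) / denom    # BB1 step size
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = bb_gradient_descent(A, b, np.zeros(2))
print(x)  # approaches the minimizer A^{-1} b = (0.2, 0.4)
```

Unlike classical steepest descent with exact line search, the BB step is non-monotone, which is precisely what breaks the zigzag pattern mentioned above.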
Procedia PDF Downloads 540
2408 Scheduling Residential Daily Energy Consumption Using Bi-criteria Optimization Methods
Authors: Li-hsing Shih, Tzu-hsun Yen
Abstract:
Because of the long-term commitment to net-zero carbon emissions, utility companies are including more renewable energy supply, which generates electricity under time and weather restrictions. This leads to time-of-use electricity pricing that reflects the actual cost of energy supply. From an end-user point of view, better residential energy management is needed to incorporate time-of-use prices and assist end users in scheduling their daily use of electricity. This study uses bi-criteria optimization methods to schedule daily energy consumption by minimizing the electricity cost and maximizing the comfort of end users. Different from most previous research, this study schedules users' activities rather than household appliances, giving a better measure of users' comfort and satisfaction; the relation between each activity and the use of different appliances can be defined by the users. The comfort level is highest when the time and duration of an activity completely meet the user's expectation, and it decreases when they do not. A questionnaire survey was conducted to collect data for establishing regression models that describe users' comfort levels when the execution time and duration of activities differ from user expectations; six regression models, one for each of six types of activities, were established from the responses. A computer program was developed to evaluate the electricity cost and the comfort level of each feasible schedule and then find the non-dominated schedules. The epsilon-constraint method is used to select the optimal schedule from the non-dominated set. A hypothetical case is presented to demonstrate the effectiveness of the proposed approach and the computer program.
Using the program, users can obtain an optimal schedule of daily energy consumption by inputting the intended time and duration of their activities and the given time-of-use electricity prices.
Keywords: bi-criteria optimization, energy consumption, time-of-use price, scheduling
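The two-step selection described above (Pareto filtering, then an epsilon-constraint pick) can be sketched with a toy set of candidate schedules. The schedule names, costs, comfort levels, and the 0.85 comfort threshold are all illustrative assumptions.

```python
# Hedged sketch: non-dominated filtering over (cost, comfort) pairs,
# followed by an epsilon-constraint selection.

schedules = {              # name: (electricity cost, comfort level) -- illustrative
    "A": (4.0, 0.9),
    "B": (3.0, 0.7),
    "C": (5.0, 0.95),
    "D": (3.5, 0.6),       # dominated by B: costlier and less comfortable
}

def non_dominated(items):
    keep = {}
    for name, (c, u) in items.items():
        dominated = any(c2 <= c and u2 >= u and (c2, u2) != (c, u)
                        for c2, u2 in items.values())
        if not dominated:
            keep[name] = (c, u)
    return keep

pareto = non_dominated(schedules)
# Epsilon constraint: minimize cost subject to comfort >= 0.85
best = min((n for n in pareto if pareto[n][1] >= 0.85), key=lambda n: pareto[n][0])
print(sorted(pareto), best)
```

The real program replaces the toy dictionary with every feasible activity schedule scored by the six regression-based comfort models.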
Procedia PDF Downloads 59
2407 Optimization of Reliability and Communicability of a Random Two-Dimensional Point Patterns Using Delaunay Triangulation
Authors: Sopheak Sorn, Kwok Yip Szeto
Abstract:
Reliability is an important measure of how well a system meets its design objective; mathematically, it is the probability that a complex system performs satisfactorily. When the system is described by a network of N components (nodes) and their L connections (links), system reliability becomes a network design problem that is an NP-hard combinatorial optimization problem. In this paper, we address the network design problem for a random point pattern in two dimensions. We make use of a Voronoi construction, with each cell containing exactly one point of the pattern, and compute the reliability of the Voronoi diagram's dual, i.e., the Delaunay graph. We further investigate the communicability of the Delaunay network. We find that the homogeneity of a Delaunay graph's degree distribution correlates positively with its reliability and negatively with its communicability. Based on these correlations, we alter the communicability and the reliability by performing random edge flips, which preserve the number of links and nodes in the network but can increase the communicability of a Delaunay network at the cost of its reliability. This transformation is then used to optimize a Delaunay network for the optimum geometric mean of communicability and reliability. We also discuss the importance of edge flips in the evolution of real soap froth in two dimensions.
Keywords: communicability, Delaunay triangulation, edge flip, reliability, two-dimensional network, Voronoi
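Building the Delaunay graph of a random 2-D point set and computing the two quantities the abstract correlates (degree distribution and communicability) can be sketched as follows. Estrada-Hatano total communicability, the sum of the entries of e^A, is assumed here as the communicability measure; the point count is illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.linalg import expm

# Hedged sketch: Delaunay graph of a random planar point pattern.
rng = np.random.default_rng(1)
pts = rng.random((30, 2))
tri = Delaunay(pts)

# Adjacency matrix: each triangle contributes its three edges.
A = np.zeros((len(pts), len(pts)))
for simplex in tri.simplices:
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        A[a, b] = A[b, a] = 1.0

degrees = A.sum(axis=1)              # degree distribution of the Delaunay graph
communicability = expm(A).sum()      # total communicability, sum of (e^A)_ij
print(degrees.mean(), communicability > 0)
```

A random edge flip would replace the shared edge of two adjacent triangles with the opposite diagonal, keeping N and L fixed while reshaping `degrees` and `communicability`.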
Procedia PDF Downloads 419
2406 Modeling Studies on the Elevated Temperatures Formability of Tube Ends Using RSM
Authors: M. J. Davidson, N. Selvaraj, L. Venugopal
Abstract:
The elevated-temperature forming behaviour in the expansion of thin-walled tubes is studied in the present work. The influence of the process parameters, namely the die angle, the die ratio, and the operating temperature, on the expansion of tube ends at elevated temperatures is examined. The range of operating parameters was identified by performing extensive simulation studies, with the hot forming parameters evaluated for AA2014 alloy. An experimental matrix was developed from the feasible range obtained from the simulation results, and design of experiments was used for the optimization of the process parameters. Response surface methodology (RSM) with a Box-Behnken design (BBD) is used to develop the mathematical model for expansion, and analysis of variance (ANOVA) is used to analyze the influence of the process parameters on the expansion of tube ends. The effects of various process combinations on expansion are analyzed through graphical representations. The developed model is found to be appropriate, as the coefficient of determination is very high, at 0.9726. The predicted values coincide well with the experimental results, within acceptable error limits.
Keywords: expansion, optimization, response surface methodology (RSM), ANOVA, BBD, residuals, regression, tube
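The RSM model fitted from a Box-Behnken design is a second-order polynomial in the factors. A minimal least-squares sketch on illustrative two-factor data (noise-free, so the known coefficients are recovered exactly) shows the mechanics:

```python
import numpy as np

# Hedged sketch: second-order response-surface fit, y = b0 + b1*x1 + b2*x2
# + b12*x1*x2 + b11*x1^2 + b22*x2^2. Data below are synthetic placeholders,
# not the paper's tube-expansion measurements.

def design_matrix(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20)   # coded factor levels
y = 2.0 + 1.5 * x1 - 0.5 * x2 + 0.8 * x1 * x2 + 0.3 * x1**2  # true surface

beta, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
print(np.round(beta, 3))  # ≈ [2.0, 1.5, -0.5, 0.8, 0.3, 0.0]
```

With real experimental data the residuals feed the ANOVA and the R² value (0.9726 in the study above).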
Procedia PDF Downloads 509
2405 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction (EWS-FP) and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools and methods, viz., high-performance computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2-D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems with high-resolution, local-level warning analysis and better lead time. High-performance computers with capacities of the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for flood forecasting in near-real-time. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules, with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
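The reported scaling (8 h on 45 nodes down to 3 h on 135 nodes) implies a concrete speed-up and parallel efficiency, worked out below from those two data points alone:

```python
# Speed-up and parallel efficiency implied by the reported HPC run times.
t1, n1 = 8.0, 45     # hours, CPU nodes (baseline run)
t2, n2 = 3.0, 135    # hours, CPU nodes (scaled run)

speedup = t1 / t2                 # how much faster the scaled run is
efficiency = speedup / (n2 / n1)  # speed-up per unit of added hardware
print(round(speedup, 2), round(efficiency, 2))  # 2.67 0.89
```

An efficiency of about 0.89 on a 3x node increase is the "good scalability" the abstract refers to.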
Procedia PDF Downloads 89
2404 Optimization of Strategies and Models Review for Optimal Technologies-Based on Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, A. Elazim Negm
Abstract:
Green architecture has recently become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment, and new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures that keep the built environment more sustainable. The common objective is that green buildings should be designed to minimize the overall impact of the built environment on ecosystems in general, and particularly on human health and the natural environment. This leads to protecting occupant health, improving employee productivity, reducing pollution, and sustaining the environment. Green building design involves multiple parameters that may be interrelated, contradictory, vague, and of qualitative or quantitative nature. This paper presents a comprehensive, critical state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture and buildings can be improved using the analyzed technologies to seek optimal green solution strategies and models that assist in making the best possible decision among different alternatives.
Keywords: green architecture/building, technologies, optimization, strategies, fuzzy techniques, models
Procedia PDF Downloads 475
2403 Structural Parameter-Induced Focusing Pattern Transformation in CEA Microfluidic Device
Authors: Xin Shi, Wei Tan, Guorui Zhu
Abstract:
The contraction-expansion array (CEA) microfluidic device is widely used for particle focusing and separation. Without the introduction of external fields, it can manipulate particles using hydrodynamic forces, including inertial lift forces and Dean drag forces. The focusing pattern of the particles in a CEA channel can be affected by the structural parameter, the blockage ratio, and the flow streamlines. Here, two typical focusing patterns with five different structural parameters were investigated, and the force mechanism was analyzed. We present nine CEA channels with different aspect ratios based on the process of changing the particle equilibrium positions. The results show that 10-15 μm particles have the potential to generate a side focusing line as the structural parameter (R𝓌) increases. For a given channel structure and target particles, when the Reynolds number (Rₑ) exceeds a critical value, the focusing pattern transforms from a single pattern to a double pattern. The parameter α/R𝓌 can be used to calculate the critical Reynolds number for this focusing pattern transformation. These results can provide guidance for microchannel design and biomedical analysis.
Keywords: microfluidic, inertial focusing, particle separation, Dean flow
Procedia PDF Downloads 79
2402 Optimization of Assay Parameters of L-Glutaminase from Bacillus cereus MTCC1305 Using Artificial Neural Network
Authors: P. Singh, R. M. Banik
Abstract:
An artificial neural network (ANN) was employed to optimize the assay parameters, viz., time, temperature, pH of the reaction mixture, enzyme volume, and substrate concentration, of L-glutaminase from Bacillus cereus MTCC 1305. The ANN model showed a high coefficient of determination (0.9999), a low root mean square error (0.6697), and a low absolute average deviation. A multilayer perceptron neural network trained with an error back-propagation algorithm was used to develop the predictive model; its topology was obtained as 5-3-1 after applying the Levenberg-Marquardt (LM) training algorithm. The predicted activity of L-glutaminase was 633.7349 U/l at the optimum assay parameters: pH of reaction mixture 7.5, reaction time 20 minutes, incubation temperature 35˚C, substrate concentration 40 mM, and enzyme volume 0.5 ml. The prediction was verified by running an experiment at the simulated optimum assay conditions, where the activity was found to be 634.00 U/l. The application of the ANN model thus improved the activity of L-glutaminase 1.499-fold.
Keywords: Bacillus cereus, L-glutaminase, assay parameters, artificial neural network
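The 5-3-1 topology reported above (five assay parameters in, three hidden units, one predicted activity out) can be sketched as a forward pass. The weights, activation choice, and input scaling below are illustrative assumptions; the paper trains its weights with the Levenberg-Marquardt algorithm.

```python
import numpy as np

# Hedged sketch: forward pass of a 5-3-1 multilayer perceptron.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(3)   # input (5) -> hidden (3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # hidden (3) -> output (1)

def predict(x):
    h = np.tanh(W1 @ x + b1)        # sigmoid-type hidden activation
    return (W2 @ h + b2)[0]         # linear output unit (predicted activity)

# Assumed ordering: pH, time (min), temperature (C), substrate (mM),
# enzyme volume (ml), each scaled to [0, 1] before entering the network.
x = np.array([0.75, 0.4, 0.35, 0.4, 0.5])
print(predict(x))
```

Training would adjust `W1, b1, W2, b2` so that `predict` reproduces the measured activities across the assay design points.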
Procedia PDF Downloads 429
2401 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty
Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus
Abstract:
Electric vehicles are a popular transportation medium renowned for potential environmental benefits. However, large and uncontrolled charging volumes can impact distribution networks negatively. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is developed due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, with the decision-makers needing to make a collaborative decision.
Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming
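The weighted-sum trade-off between charging cost and peak loading can be sketched with a brute-force toy version of the problem. The prices, base load, EV demand, weight, and coarse 10 kWh charging steps are all illustrative assumptions standing in for the paper's mixed-integer formulation.

```python
import numpy as np

# Hedged sketch: weighted-sum scalarisation of (energy cost, peak load)
# over coarse four-slot charging profiles that deliver the required energy.
price = np.array([0.30, 0.25, 0.10, 0.12])   # $/kWh per slot (illustrative)
base = np.array([40.0, 35.0, 20.0, 22.0])    # kW non-EV load (illustrative)
demand = 30.0                                 # kWh the EV must receive
w = 0.5                                       # weight between the two objectives

best, best_obj = None, np.inf
for p in np.ndindex(4, 4, 4, 4):              # each slot charges 0/10/20/30 kWh
    profile = np.array(p) * 10.0
    if profile.sum() != demand:
        continue
    cost = price @ profile                    # objective 1: energy cost
    peak = (base + profile).max()             # objective 2: peak loading
    obj = w * cost + (1 - w) * peak
    if obj < best_obj:
        best, best_obj = profile, obj
print(best, best_obj)
```

Sweeping `w` from 0 to 1 traces the cost-versus-peak Pareto front, which is how a weighted-sum method exposes the conflict the paper reports.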
Procedia PDF Downloads 178
2400 Machine Learning Assisted Performance Optimization in Memory Tiering
Authors: Derssie Mebratu
Abstract:
As a large variety of microservices, web services, social graph applications, and media applications are continuously developed, it is vital to design and build a reliable, efficient, and faster memory tiering system. Despite limited design, implementation, and deployment in the last few years, several techniques have been developed to improve memory tiering in the cloud. These include developing an optimal scanning frequency, improving and tracking page movement, identifying recently accessed pages, storing pages across the tiers, and classifying pages as hot, warm, or cold, so that hot pages are stored in the first tier, dynamic random access memory (DRAM); warm pages in the second tier, Compute Express Link (CXL)-attached memory; and cold pages in the third tier, non-volatile memory (NVM). Beyond the current proposals and implementations, we also develop a new technique based on a machine learning algorithm, with which throughput improved by 25% and latency by 95% compared to the baseline.
Keywords: machine learning, Bayesian optimization, memory tiering, CXL, DRAM
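The hot/warm/cold classification step described above can be sketched as a simple threshold policy. The page names, access counts, and thresholds are illustrative; a learned policy would tune the thresholds (e.g., via the Bayesian optimization the keywords mention) rather than fix them.

```python
# Hedged sketch: placing pages into DRAM / CXL / NVM tiers by access count
# per scan interval. All numbers below are illustrative placeholders.

accesses = {"p1": 120, "p2": 3, "p3": 45, "p4": 0, "p5": 200}
HOT, WARM = 100, 10   # assumed accesses-per-interval thresholds

def classify(count):
    if count >= HOT:
        return "DRAM"     # hot tier: fastest memory
    if count >= WARM:
        return "CXL"      # warm tier: CXL-attached memory
    return "NVM"          # cold tier: non-volatile memory

placement = {page: classify(c) for page, c in accesses.items()}
print(placement)
```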
Procedia PDF Downloads 96
2399 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. 
For testing and validation, a D-Wave 2X device was used, along with QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but it does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
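The two integer-to-binary transformations the study compares can be sketched directly. These encode/decode helpers are illustrative; on real hardware the 1-hot constraint (exactly one bit set) must additionally be enforced by a penalty term in the QUBO.

```python
# Hedged sketch: integer-range (binary expansion) vs. 1-hot encoding of an
# integer variable into the binary qubits an annealer works with.

def range_encode(x, n_bits):
    return [(x >> i) & 1 for i in range(n_bits)]       # LSB first

def range_decode(bits):
    return sum(b << i for i, b in enumerate(bits))

def one_hot_encode(x, n_values):
    return [1 if i == x else 0 for i in range(n_values)]

def one_hot_decode(bits):
    assert sum(bits) == 1   # the constraint a QUBO penalty must enforce
    return bits.index(1)

x = 6
print(range_encode(x, 4), one_hot_encode(x, 8))
```

The trade-off is visible even in miniature: range encoding covers 0..15 with 4 qubits, while 1-hot needs 8 qubits to cover 0..7 but keeps each value a single physical variable, which can ease embedding.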
Procedia PDF Downloads 525
2398 Long-Term Results of Coronary Bifurcation Stenting with Drug Eluting Stents
Authors: Piotr Muzyk, Beata Morawiec, Mariusz Opara, Andrzej Tomasik, Brygida Przywara-Chowaniec, Wojciech Jachec, Ewa Nowalany-Kozielska, Damian Kawecki
Abstract:
Background: Coronary bifurcation is one of the most complex lesions in patients with coronary artery disease. Provisional T-stenting is currently one of the recommended techniques. The aim was to assess optimal methods of treatment in the era of drug-eluting stents (DES). Methods: The registry consisted of data from 1916 patients treated with percutaneous coronary interventions (PCI) using either first- or second-generation DES. Patients with bifurcation lesions entered the analysis. Major adverse cardiac and cerebrovascular events (MACCE) were assessed at one year of follow-up and comprised death, acute myocardial infarction (AMI), repeated PCI (re-PCI) of the target vessel, and stroke. Results: Of the 1916 registry patients, 204 (11%) were diagnosed with a bifurcation lesion >50% and entered the analysis. The most commonly used technique was provisional T-stenting (141 patients, 69%). Optimization with the kissing-balloon technique was performed in 45 patients (22%). In 59 patients (29%) a second-generation DES was implanted, while in 112 patients (55%) a first-generation DES was used; in 33 patients (16%) both types of DES were used. Procedural success (TIMI 3 flow) was achieved in 98% of patients. At one-year follow-up, there were 39 MACCE (19%): 9 deaths, 17 AMI, 16 re-PCI, and 5 strokes. Provisional T-stenting resulted in a rate of MACCE similar to other techniques (16% vs. 5%, p=0.27) and a similar occurrence of re-PCI (6% vs. 2%, p=0.78). The post-PCI kissing-balloon technique gave outcomes equal to those of patients in whom no optimization technique was used (3% vs. 16% MACCE, p=0.39). The type of implanted DES (second- vs. first-generation) had no influence on MACCE (4% vs. 14%, respectively, p=0.12) or re-PCI (1.7% vs. 51% patients, respectively, p=0.28). Conclusions: The treatment of bifurcation lesions with PCI represents a high-risk procedure with a high rate of MACCE.
Stenting technique, optimization of PCI and the generation of the implanted stent should be personalized for each case to balance the risk of the procedure. In this setting, operator experience might be a factor in better outcomes, which should be further investigated.Keywords: coronary bifurcation, drug eluting stents, long-term follow-up, percutaneous coronary interventions
Procedia PDF Downloads 2042397 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass
Authors: Xiang Zheng, Zhaoping Zhong
Abstract:
In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel via different routes, and the main results of material and energy flow were obtained. Process optimization and evaluation were carried out on four routes: cellulosic biomass pyrolysis-gasification to low-carbon olefins followed by olefin oligomerization; biomass hydrothermal pyrolysis and polymerization to jet fuel; biomass fermentation to ethanol; and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass species (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses all followed the order rice husk > poplar wood > corn stover. In terms of human health hazard potential and solid waste potential, the order was poplar > rice husk > corn stover. In the poplar pathway, 100 kg of poplar biomass was input to obtain 11.9 kg of aviation kerosene fraction and 6.3 kg of gasoline fraction. The energy conversion rate of the system was 31.6% when the output product energy included only the aviation kerosene product. In the basic hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the basic process was 33.09%, which could be increased to 38.47% with the optimal utilization of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization, and the exergy loss mainly came from the concentration of the dilute precursor solution. Within the environmental impact, global warming potential is mostly affected by the production process. Poplar wood was used as the raw material in the process of ethanol production from cellulosic biomass. 
The simulation results showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The power output of boiler combustion reached 94.1 MJ, the unit power consumption of the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was mainly concentrated in the production and agricultural processes. Building on the original biomass-pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue left over from cellulose fermentation to ethanol was used as the pyrolysis raw material, coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.Keywords: biomass conversion, biofuel, process optimization, life cycle assessment
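The energy conversion rates quoted above are ratios of product energy to feedstock energy. A minimal sketch of that bookkeeping, using illustrative lower heating values (the LHV figures below are assumptions for demonstration, not values from the study):

```python
# Energy conversion rate = energy in useful products / energy in biomass feed.
# LHV values (MJ/kg) are illustrative assumptions, not from the study.
LHV = {"biomass": 18.0, "ethanol": 26.8, "jet_fuel": 43.0}

def energy_conversion_rate(feed_kg, products):
    """products: dict of {name: mass_kg}; returns product energy / feed energy."""
    e_in = feed_kg * LHV["biomass"]
    e_out = sum(kg * LHV[name] for name, kg in products.items())
    return e_out / e_in

# 100 kg of biomass yielding 24.8 kg of ethanol (fermentation route)
rate = energy_conversion_rate(100.0, {"ethanol": 24.8})
```

With these assumed heating values the ratio lands near 37%, in the same range as the rates reported above; the actual figures depend on the heating values and co-products counted.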
Procedia PDF Downloads 702396 Daylightophil Approach towards High-Performance Architecture for Hybrid-Optimization of Visual Comfort and Daylight Factor in BSk
Authors: Mohammadjavad Mahdavinejad, Hadi Yazdi
Abstract:
The greatest share of the information we receive from the world is visual; thus, light is an inseparable element of human life. The use of daylight in visual perception and environment readability is an important issue for users. With regard to the hazards of greenhouse gas emissions from fossil fuels, and in line with attitudes toward reducing energy consumption, the correct use of daylight results in lower energy consumption by artificial lighting, heating and cooling systems. Windows are usually the starting points for analyses and simulations aimed at visual comfort and energy optimization; therefore, attention should be paid to the orientation of buildings to minimize electrical energy use and maximize the use of daylight. In this paper, using the Design Builder software, the effect of the orientation of an 18 m² (3 m × 6 m) room with a height of 3 m in the city of Tehran has been investigated, considering the design constraint limitations. In these simulations, the orientation of the building was changed in one-degree increments, and the window, with an 80% window-to-wall ratio, is located on the smaller face (3 m × 3 m) of the building. The results indicate that building orientation strongly affects energy efficiency with respect to high-performance architecture and planning goals and objectives.Keywords: daylight, window, orientation, energy consumption, design builder
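The one-degree orientation sweep described above can be sketched generically: step the building azimuth, evaluate an energy model at each step, and keep the minimum. The cosine objective below is a toy stand-in for a Design Builder run (the 180° "equator-facing" optimum and the coefficients are assumptions, not the study's model):

```python
import math

def annual_energy(azimuth_deg):
    """Toy energy proxy: consumption is lowest when the window faces the
    equator (assumed here to be azimuth 180); NOT a building simulation."""
    return 100.0 - 30.0 * math.cos(math.radians(azimuth_deg - 180.0))

def best_orientation(model, step_deg=1):
    """Sweep the azimuth in one-degree increments and return the minimizer."""
    angles = range(0, 360, step_deg)
    return min(angles, key=model)

best = best_orientation(annual_energy)
```

In practice each call to `model` would be a full thermal-and-daylight simulation, so the one-degree sweep is 360 simulation runs; the selection logic stays the same.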
Procedia PDF Downloads 2332395 A Constrained Neural Network Based Variable Neighborhood Search for the Multi-Objective Dynamic Flexible Job Shop Scheduling Problems
Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir
Abstract:
In this paper, a new neural-network-based variable neighborhood search is proposed for multi-objective dynamic flexible job shop scheduling problems. The neural network controls the problem's constraints to prevent infeasible solutions, while the variable neighborhood search (VNS) applies moves, based on the critical block concept, to improve the solutions. Two approaches are used for managing the constraints: in the first, infeasible solutions are modified according to the constraints after the moves are applied, while in the second, infeasible moves are prevented. Several neighborhood structures from the literature, with some modifications, as well as new structures, are used in the VNS. The suggested neighborhoods are more systematically defined and easier to implement. Comparison is done on a multi-objective flexible job shop scheduling problem that is dynamic because of the jobs' different release times and machine breakdowns. The results show that the presented method performs better than the compared VNSs selected from the literature.Keywords: constrained optimization, neural network, variable neighborhood search, flexible job shop scheduling, dynamic multi-objective optimization
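The second constraint-handling approach (discarding infeasible moves outright) can be illustrated with a minimal VNS on a toy permutation problem; the weighted objective and the single precedence constraint below are invented for illustration and stand in for the paper's scheduling model and neural-network feasibility check:

```python
import random

random.seed(1)
weights = [5, 3, 8, 1, 9, 2]

def cost(perm):
    # Toy objective: position-weighted sum; not the paper's scheduling model.
    return sum((i + 1) * weights[j] for i, j in enumerate(perm))

def feasible(perm):
    # Toy precedence constraint: job 0 must precede job 1.
    return perm.index(0) < perm.index(1)

def swap_move(perm):
    i, j = random.sample(range(len(perm)), 2)
    p = perm[:]
    p[i], p[j] = p[j], p[i]
    return p

def insert_move(perm):
    i, j = random.sample(range(len(perm)), 2)
    p = perm[:]
    p.insert(j, p.pop(i))
    return p

def vns(start, iters=300):
    """Basic VNS: on improvement restart from the first neighborhood,
    otherwise switch to the next one; infeasible moves are discarded."""
    neighborhoods = [swap_move, insert_move]
    best = start[:]
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](best)
            if feasible(cand) and cost(cand) < cost(best):
                best, k = cand, 0
            else:
                k += 1
    return best

start = [0, 1, 2, 3, 4, 5]
result = vns(start)
```

The first approach from the abstract would instead accept an infeasible candidate and repair it (e.g., reinsert job 0 ahead of job 1) before evaluating its cost.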
Procedia PDF Downloads 3462394 Optimization in the Compressive Strength of Iron Slag Self-Compacting Concrete
Authors: Luis E. Zapata, Sergio Ruiz, María F. Mantilla, Jhon A. Villamizar
Abstract:
Sand as a fine aggregate for concrete production needs a feasible substitute due to several environmental issues. In this work, a study of the behavior of self-compacting concrete mixtures is presented, with replacement of sand by iron slag from 0.0% to 50.0% by weight and variation of the water/cementitious-material ratio between 0.3 and 0.5. Fresh-state control tests of slump flow, T500, J-ring and L-box were performed. In the hardened state, compressive strength was determined, and optimization from response surface analysis was performed. The study of the variables in the hardened state was developed based on inferential statistical analyses using central composite design methodology and posterior analysis of variance (ANOVA). The most relevant result regarding the presence of iron slag as a replacement for natural sand was an increase in compressive strength of up to 50% over the control mixtures at 7, 14, and 28 days of maturity. Considering this result, it is possible to infer that iron slag is an acceptable replacement for natural fine aggregate in structural concrete.Keywords: ANOVA, iron slag, response surface analysis, self-compacting concrete
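The central composite design behind the response-surface analysis can be sketched for two factors (here, slag replacement and w/cm ratio in coded units). The rotatable α value and the number of center replicates below are standard textbook defaults, assumed rather than taken from the study:

```python
import itertools

def central_composite_design(k=2, n_center=5):
    """Coded-unit CCD for k factors: 2**k factorial points, 2k axial points
    at distance alpha = (2**k)**0.25 (rotatable), plus center replicates."""
    alpha = (2 ** k) ** 0.25
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = central_composite_design(k=2, n_center=5)  # 4 + 4 + 5 = 13 runs
```

Each coded point is then mapped to real levels (e.g., −1 → 0% slag, +1 → 50% slag), the 13 mixes are cast and tested, and a quadratic surface is fitted to the measured strengths for the ANOVA.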
Procedia PDF Downloads 1442393 A Prospective Evaluation of Thermal Radiation Effects on Magneto-Hydrodynamic Transport of a Nanofluid Traversing a Spongy Medium
Authors: Azad Hussain, Shoaib Ali, M. Y. Malik, Saba Nazir, Sarmad Jamal
Abstract:
This article reports a fundamental numerical investigation analyzing the impact of thermal radiation on MHD flow of a differential-type nanofluid past a porous plate. Here, viscosity is taken as a function of temperature. The energy equation is considered in the presence of viscous dissipation. The mathematical formulations of nanoparticle concentration, velocity and temperature are first cast into dimensionless form via suitable transformations and then solved using the shooting technique to obtain numerical solutions. Graphs have been plotted to check the convergence of the constructed solutions. Finally, the influence of the relevant parameters on the nanoparticle concentration, velocity and temperature fields is discussed in a comprehensive way. Moreover, physical measures of engineering importance, such as the Sherwood number, skin friction and Nusselt number, are also calculated. It is observed that thermal radiation enhances the temperature for both Vogel's and Reynolds' viscosity models, but the normal stress parameter causes a reduction in the temperature profile.Keywords: MHD flow, differential type nanofluid, porous medium, variable viscosity, thermal radiation
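The shooting technique mentioned above converts a boundary value problem into initial value problems and iterates on the unknown initial slope until the far boundary condition is met. A minimal sketch on the linear test problem y'' = −y, y(0) = 0, y(π/2) = 1 (a stand-in for illustration, not the coupled nanofluid equations):

```python
import math

def integrate(slope, n=1000):
    """RK4 for y'' = -y from x = 0 to pi/2 with y(0) = 0, y'(0) = slope."""
    h = (math.pi / 2) / n
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)          # first-order system (y' = v, v' = -y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def shoot(target=1.0, s0=0.0, s1=2.0, tol=1e-8):
    """Secant iteration on the initial slope until y(pi/2) hits the target."""
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    while abs(f1) > tol:
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, integrate(s1) - target
    return s1

slope = shoot()  # exact solution is y = sin(x), so y'(0) = 1
```

For the actual momentum-energy-concentration system the state vector is larger and several missing slopes are adjusted simultaneously, but the iterate-on-initial-conditions structure is the same.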
Procedia PDF Downloads 2432392 Energy Consumption in Biodiesel Production at Various Kinetic Reaction of Transesterification
Authors: Sariah Abang, S. M. Anisuzzaman, Awang Bono, D. Krishnaiah, S. Rasmih
Abstract:
Biodiesel is a potential renewable energy source because it is biodegradable and non-toxic. The challenge of its commercialization is the high production cost, since its feedstock is also used in various food products. Non-competitive feedstocks such as waste cooking oils normally contain a large amount of free fatty acids (FFAs). A large amount of fatty acid degrades the alkaline catalyst in biodiesel production, thereby decreasing the biodiesel production rate. Generally, biodiesel production processes, including esterification and transesterification, are conducted in a mixed system, in which the hydrodynamic effect on the reaction cannot be completely defined. The aim of this study was to investigate the effect of variations in the rate constant and activation energy on the energy consumption of biodiesel production. Usually, changes in the rate constant and activation energy depend on the operating temperature and the degradation of the catalyst. By varying the activation energy and kinetic rate constant, their effects on the energy consumption of biodiesel production can be seen. The results showed that the energy consumption of biodiesel production depends on the changes in rate constant and activation energy. The study was simulated using Aspen HYSYS.Keywords: methanol, palm oil, simulation, transesterification, triolein
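The coupling between activation energy and rate constant described above follows the Arrhenius law, k = A·exp(−Ea/RT): a higher Ea (e.g., from catalyst degradation) lowers k, which lengthens the reaction time and hence the energy spent on mixing and heating. A small sketch (the pre-exponential factor and Ea values are illustrative assumptions, not fitted transesterification kinetics):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R T)); Ea in J/mol, T in K."""
    return A * math.exp(-Ea / (R * T))

def batch_time(k, conversion=0.95):
    """Time to reach a given conversion for a pseudo-first-order reaction."""
    return -math.log(1.0 - conversion) / k

# Illustrative values: A = 1e7 1/s, Ea varied at T = 333 K (60 C)
k_low_ea  = arrhenius(1e7, 50_000.0, 333.0)
k_high_ea = arrhenius(1e7, 60_000.0, 333.0)
```

The longer batch time at the higher activation energy is the mechanism by which catalyst degradation translates into higher specific energy consumption per unit of biodiesel.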
Procedia PDF Downloads 3202391 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots
Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra
Abstract:
Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and proposes to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations and their operation and maintenance are highly expensive. Therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization by the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) is also explored in the present work for the design of an air quality network for Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations with their corresponding land use patterns are ranked and identified from 1 km × 1 km grids.Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation
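The IDW method referred to above estimates the concentration at an unmonitored grid cell as a distance-weighted average of station readings, which is how candidate grid cells can be scored for interpolation error. A minimal sketch (the station coordinates and PM2.5 readings are made-up illustrative numbers):

```python
def idw(x, y, stations, power=2):
    """stations: list of (sx, sy, value). Returns the IDW estimate at (x, y)."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value  # exactly on a station: return its reading
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

# Illustrative PM2.5 readings (ug/m3) at stations on a 1 km grid
stations = [(0.0, 0.0, 40.0), (1.0, 0.0, 60.0), (0.0, 1.0, 50.0)]
estimate = idw(0.5, 0.5, stations)
```

Because the weights are positive and normalized, the estimate always stays within the range of the neighboring station values, a useful sanity check when ranking candidate 1 km × 1 km cells.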
Procedia PDF Downloads 1892390 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications
Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha
Abstract:
Clusters of microcalcifications (MCCs) are among the most frequent signs of ductal carcinoma in situ (DCIS) recognized by mammography. The least-square support vector machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for labeling MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made. For the comparative evaluation, the confusion matrix and ROC analysis are used. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas are collected, containing 235 malignant and 145 benign samples. A set of 50 features is calculated for each suspicious area. After this, an optimal subset of the 23 most suitable features is selected from the 50 features by particle swarm optimization (PSO). The results of the proposed study are quite promising.Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization
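What makes LS-SVM cheaper than the standard SVM is that its equality constraints reduce training to one linear system, [[0, 1ᵀ], [1, K + I/γ]]·[b; α] = [0; y], instead of a quadratic program. A self-contained sketch on a made-up 2-D toy set (the data points, γ, and the RBF width are illustrative assumptions, not the 23 mammogram features):

```python
import math

def rbf(a, b, sigma=1.0):
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def solve(A, rhs):
    """Gauss-Jordan elimination with partial pivoting on a dense system."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lssvm_train(X, y, gamma=10.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(X[i], X[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(y))
    return sol[0], sol[1:]  # bias b, support values alpha

def lssvm_predict(x, X, alpha, b):
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, X)) + b

# Toy "benign" (-1) vs "malignant" (+1) points in a 2-D feature space
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (3.0, 3.0), (3.0, 4.0), (4.0, 3.0)]
y = [-1, -1, -1, 1, 1, 1]
b, alpha = lssvm_train(X, y)
preds = [1 if lssvm_predict(x, X, alpha, b) > 0 else -1 for x in X]
```

Swapping the kernel function here is all the abstract's kernel comparison requires; the training system and predictor are unchanged.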
Procedia PDF Downloads 3542389 Improved Blood Glucose-Insulin Monitoring with Dual-Layer Predictive Control Design
Authors: Vahid Nademi
Abstract:
With the wide use of wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump, advanced control methods are still needed to get the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can provide significant contributions to patients suffering from chronic diseases such as diabetes. This study deals with the key role of a two-layer insulin-glucose regulator based on a model predictive control (MPC) scheme, so that the patient's predicted glucose profile is kept in compliance with the insulin level injected automatically through the insulin pump. This is achieved by an iterative optimization algorithm, an integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and the body's characteristics. The feasibility of the discussed control approach is also evaluated by means of numerical simulations of two case scenarios using measured data. The obtained results verify the superior and reliable performance of the proposed control scheme with no negative impact on patient safety.Keywords: blood glucose monitoring, insulin pump, predictive control, optimization
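The receding-horizon idea behind the regulator can be shown with a deliberately simplified sketch: a one-state linear glucose model and a grid search over a constant insulin rate per horizon (a toy stand-in for the IPA-SQP solver; every coefficient below is an assumption for illustration, not a clinical parameter):

```python
def step(g, u, basal=140.0, decay=0.05, sensitivity=2.0):
    """Toy one-state glucose update (mg/dL per step); NOT a clinical model."""
    return g + decay * (basal - g) - sensitivity * u

def mpc_action(g, target=100.0, horizon=10, penalty=0.1):
    """Pick the constant insulin rate over the horizon that minimizes a
    quadratic tracking-plus-effort cost (grid search stands in for SQP)."""
    best_u, best_cost = 0.0, float("inf")
    for i in range(51):
        u = i * 0.1  # candidate rates 0.0 .. 5.0
        gp, cost = g, 0.0
        for _ in range(horizon):
            gp = step(gp, u)
            cost += (gp - target) ** 2 + penalty * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

g = 180.0
for _ in range(40):  # closed loop: apply only the first move, then re-plan
    g = step(g, mpc_action(g))
```

Re-planning at every step from the latest CGM reading is what gives MPC its robustness to the glucose-insulin uncertainties the abstract mentions; the real scheme replaces the grid search with the IPA-SQP solver and a richer patient model.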
Procedia PDF Downloads 1362388 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies
Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar
Abstract:
Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimization in microfluidic channels extends significantly into biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimized, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow. It mitigates the manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analyzing diseases. 
A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimized alongside channel width, length, and substrate porosity, creates a synergistic effect that minimizes errors and maximizes efficiency. This holistic optimization approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing.Keywords: microfluidic device, minitab, statistical optimization, response surface methodology
Procedia PDF Downloads 682387 Modelling Water Usage for Farming
Authors: Ozgu Turgut
Abstract:
Water scarcity is a problem for many regions and requires immediate action; solutions cannot be postponed for long. It is known that farming consumes a significant portion of usable water. In recent years, efforts to transition from surface irrigation to drip or sprinkler systems have started to pay off. It is also known that this transition does not necessarily translate into an increase in the capacity dedicated to other water consumption channels, such as city water or power usage. In order to control and allocate the water resource more purposefully, new irrigation systems have to be used with monitoring abilities that can limit the usage capacity of each farm. In this study, a decision support model is proposed which relies on bi-objective stochastic linear optimization and takes crop yield and price volatility into account. The model generates annual planting plans as well as water usage limits for each farmer in the region while accounting for the total value (i.e., profit) of the overall harvest. The mathematical model is solved optimally using the L-shaped method. The decision support model can be especially useful for regional administrations in planning the next year's planting and water incomes and expenses. That is why not only a single optimum but also a set of representative solutions from the Pareto set is generated with the proposed approach.Keywords: decision support, farming, water, tactical planning, optimization, stochastic, pareto
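The two-stage stochastic structure can be illustrated with a tiny scenario model: the first-stage decision is the planting acreage under a shared water limit, and the realized profit depends on which yield/price scenario occurs. Coarse-grid enumeration stands in for the L-shaped method here, and every coefficient is invented for illustration:

```python
import itertools

# Scenarios: (probability, profit per hectare of crop A, of crop B)
scenarios = [(0.3, 900.0, 400.0), (0.5, 600.0, 500.0), (0.2, 200.0, 700.0)]

WATER_PER_HA = {"A": 3.0, "B": 1.0}   # megalitres per hectare (assumed)
WATER_BUDGET = 120.0                  # regional allocation (assumed)
LAND = 60.0                           # hectares available (assumed)

def expected_profit(ha_a, ha_b):
    """First-stage objective: probability-weighted second-stage profit."""
    return sum(p * (pa * ha_a + pb * ha_b) for p, pa, pb in scenarios)

def feasible(ha_a, ha_b):
    water = WATER_PER_HA["A"] * ha_a + WATER_PER_HA["B"] * ha_b
    return water <= WATER_BUDGET and ha_a + ha_b <= LAND

def best_plan(step=1.0):
    grid = [i * step for i in range(int(LAND / step) + 1)]
    plans = [(a, b) for a, b in itertools.product(grid, grid) if feasible(a, b)]
    return max(plans, key=lambda ab: expected_profit(*ab))

plan = best_plan()  # acreage of crops A and B to plant
```

The water coefficient per hectare is exactly the handle the abstract's per-farm usage limits act on: tightening `WATER_BUDGET` shifts the optimal plan toward the less thirsty crop, which is the trade-off a Pareto set of plans would expose.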
Procedia PDF Downloads 732386 A Two-Stage Airport Ground Movement Speed Profile Design Methodology Using Particle Swarm Optimization
Authors: Zhang Tianci, Ding Meng, Zuo Hongfu, Zeng Lina, Sun Zejun
Abstract:
Automation of airport operations can greatly improve ground movement efficiency. In this paper, we study the speed profile design problem for advanced airport ground movement control and guidance. The problem is constrained by the surface four-dimensional trajectory generated in taxi planning. A decomposed two-stage approach is presented to solve this problem efficiently. In the first stage, speeds are allocated at control points in a way that ensures smooth speed profiles can be found later. In the second stage, detailed speed profiles for each taxi interval are generated according to the allocated control-point speeds, with the objective of minimizing overall fuel consumption. We present a swarm-intelligence-based algorithm for the first-stage problem and a discrete-variable-driven enumeration method for the second-stage problem, since it has only a small set of discrete variables. Experimental results demonstrate that the presented methodology performs well on real-world speed profile design problems.Keywords: airport ground movement, fuel consumption, particle swarm optimization, smoothness, speed profile design
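The first-stage control-point speed allocation can be sketched with a bare-bones particle swarm: each particle is a candidate speed vector at the control points, and the fitness combines a fuel proxy with a smoothness penalty on speed jumps (both fitness terms, the economical speed, and the bounds are illustrative assumptions, not the paper's model):

```python
import random

random.seed(0)
N_POINTS, V_MIN, V_MAX = 5, 2.0, 15.0  # control points, speed bounds (m/s)

def fitness(v):
    """Toy fuel proxy: quadratic deviation from an economical taxi speed,
    plus a smoothness penalty between adjacent control points."""
    fuel = sum((vi - 8.0) ** 2 for vi in v)
    smooth = sum((v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1))
    return fuel + 0.5 * smooth

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(V_MIN, V_MAX) for _ in range(N_POINTS)]
           for _ in range(n_particles)]
    vel = [[0.0] * N_POINTS for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(N_POINTS):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(V_MAX, max(V_MIN, pos[i][d] + vel[i][d]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

speeds = pso()  # allocated control-point speeds for the second stage
```

The smoothness term plays the role of the first stage's guarantee that detailed, flyable profiles exist between consecutive control points; the second-stage enumeration then fills in each taxi interval.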
Procedia PDF Downloads 582