Search results for: modeling and optimization
5055 Energy Consumption Modeling for Strawberry Greenhouse Crop by Adaptive Neuro-Fuzzy Inference System Technique: A Case Study in Iran
Authors: Azar Khodabakhshi, Elham Bolandnazar
Abstract:
Agriculture, the most important food-producing sector, is not only an energy consumer but also an energy supplier. Energy use is a helpful parameter for analyzing and evaluating agricultural sustainability. In this study, the pattern of energy consumption of strawberry greenhouses in Jiroft, Kerman province, Iran, was surveyed. The total input energy required for strawberry production was calculated as 113314.71 MJ/ha. Electricity, contributing 38.34% of the total energy, was the largest energy consumer in strawberry production. Neuro-fuzzy networks were then used to model strawberry yield. Results showed that the best yield-prediction model had a correlation coefficient, root mean square error (RMSE), and mean absolute percentage error (MAPE) of 0.9849, 0.0154 kg/ha, and 0.11%, respectively. Based on these results, the neuro-fuzzy method can predict and model strawberry crop yield well.
Keywords: crop yield, energy, neuro-fuzzy method, strawberry
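As an illustration of the fit statistics reported above, the minimal sketch below computes a correlation coefficient, RMSE, and MAPE for a predicted-versus-observed yield series; the arrays are placeholder values, not the study's data.

```python
import numpy as np

# Placeholder observed and ANFIS-predicted strawberry yields (kg/ha); not the study's data.
observed = np.array([21.5, 23.1, 19.8, 25.0, 22.4])
predicted = np.array([21.7, 22.8, 20.1, 24.6, 22.5])

r = np.corrcoef(observed, predicted)[0, 1]                       # correlation coefficient
rmse = np.sqrt(np.mean((observed - predicted) ** 2))             # root mean square error
mape = 100 * np.mean(np.abs((observed - predicted) / observed))  # mean absolute percentage error, %

print(f"R = {r:.4f}, RMSE = {rmse:.4f}, MAPE = {mape:.2f}%")
```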
Procedia PDF Downloads 381

5054 Identifying the Factors Affecting the Success of Energy Usage Saving in the Municipality of Tehran
Authors: Rojin Bana Derakhshan, Abbas Toloie
Abstract:
To optimize and improve energy efficiency in buildings, the key elements of success in optimizing energy consumption must be identified before any action is taken. Principal component analysis is one of the most valuable results of linear algebra here, because simpler non-parametric methods become confusing. An energy management system was therefore implemented according to the international standard ISO 50001:2011, and all energy parameters in the building were measured through an energy audit. In this study, data mining is used to determine the key elements influencing energy saving in buildings. The approach is based on statistical data mining techniques, combining a feature selection method with fuzzy logic to compress the massive raw data and enhance the selected features. In addition, using the energy audit results and the measured energy-consuming parameters and identified variables, the share of each energy-consuming element in overall energy dissipation is quantified in percent as a separate criterion. Accordingly, energy saving solutions are divided into three categories: low-, medium- and high-cost solutions.
Keywords: energy saving, key elements of success, optimization of energy consumption, data mining
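A minimal sketch of the feature-selection step described above, ranking candidate audit parameters by a random forest's impurity-based importances; the feature names and data are hypothetical, and the fuzzy-logic stage is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical audited parameters (columns) and monthly energy use (target); synthetic data only.
features = ["lighting_kwh", "hvac_kwh", "occupancy", "envelope_loss", "plug_loads"]
X = rng.normal(size=(200, len(features)))
y = 3.0 * X[:, 1] + 1.5 * X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")   # rank the key elements by estimated influence
```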
Procedia PDF Downloads 468

5053 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions, and their results have sparked much research on steepest descent, including the alternate minimization gradient method and Yuan's method. Inspired by these works, we modified the step size of the steepest descent method. We then compared the modified method against the Barzilai-Borwein method, the alternate minimization gradient method, and Yuan's method on quadratic test functions in terms of the number of iterations and the running time. On average, the steepest descent method with the new step sizes gives good results for small dimensions and competes with the Barzilai-Borwein and alternate minimization gradient methods for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence
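The paper's new step sizes are not given in the abstract; the sketch below shows the baseline it is compared against, gradient descent with a Barzilai-Borwein (BB1) step on a convex quadratic.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x for a symmetric positive definite A.
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)

x = np.zeros(50)
g = A @ x - b                      # gradient at the current iterate
alpha = 1.0 / np.linalg.norm(A)    # conservative first step size
for k in range(500):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)      # Barzilai-Borwein (BB1) step size
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break

print(k, np.linalg.norm(A @ x - b))   # iterations used and final residual norm
```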
Procedia PDF Downloads 540

5052 Modeling of Oxygen Supply Profiles in Stirred-Tank Aggregated Stem Cells Cultivation Process
Authors: Vytautas Galvanauskas, Vykantas Grincas, Rimvydas Simutis
Abstract:
This paper investigates a practical solution for adequate oxygen supply during pluripotent stem cell expansion processes in which the cells propagate as aggregates in stirred-suspension bioreactors. Low glucose and low oxygen concentrations are preferred for efficient proliferation of pluripotent stem cells; however, strong oxygen limitation, especially inside cell aggregates, can lead to cell starvation and death. In this research, the oxygen concentration profile inside stem cell aggregates during an expansion process was predicted using a modified oxygen diffusion model. The desired profile can be realized during cultivation by manipulating the oxygen concentration in the inlet gas or the inlet gas flow. The proposed approach is relatively simple and may be attractive for deployment in real pluripotent stem cell expansion processes.
Keywords: aggregated stem cells, dissolved oxygen profiles, modeling, stirred-tank, 3D expansion
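For intuition, the sketch below evaluates a classical steady-state diffusion-consumption profile in a spherical aggregate with zeroth-order uptake, C(r) = C_s - q(R^2 - r^2)/(6D); the parameter values are illustrative and this is not the modified model from the paper.

```python
import numpy as np

D = 2.0e-9      # oxygen diffusivity in the aggregate, m^2/s (illustrative)
q = 1.0e-2      # volumetric uptake rate, mol/(m^3 s) (illustrative)
R = 150e-6      # aggregate radius, m
C_s = 0.07      # oxygen concentration at the aggregate surface, mol/m^3

r = np.linspace(0.0, R, 50)
C = C_s - q * (R**2 - r**2) / (6.0 * D)   # steady-state profile for zeroth-order consumption
C = np.maximum(C, 0.0)                    # clip: uptake stops where oxygen is depleted

print(f"centre concentration: {C[0]:.4f} mol/m^3 (surface {C_s} mol/m^3)")
```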
Procedia PDF Downloads 305

5051 Scheduling Residential Daily Energy Consumption Using Bi-criteria Optimization Methods
Authors: Li-hsing Shih, Tzu-hsun Yen
Abstract:
Because of the long-term commitment to net zero carbon emission, utility companies include more renewable energy supply, which generates electricity with time and weather restrictions. This leads to time-of-use electricity pricing to reflect the actual cost of energy supply. From an end-user point of view, better residential energy management is needed to incorporate the time-of-use prices and assist end users in scheduling their daily use of electricity. This study uses bi-criteria optimization methods to schedule daily energy consumption by minimizing the electricity cost and maximizing the comfort of end users. Different from most previous research, this study schedules users’ activities rather than household appliances to have better measures of users’ comfort/satisfaction. The relation between each activity and the use of different appliances could be defined by users. The comfort level is at the highest when the time and duration of an activity completely meet the user’s expectation, and the comfort level decreases when the time and duration do not meet expectations. A questionnaire survey was conducted to collect data for establishing regression models that describe users’ comfort levels when the execution time and duration of activities are different from user expectations. Six regression models representing the comfort levels for six types of activities were established using the responses to the questionnaire survey. A computer program is developed to evaluate electricity cost and the comfort level for each feasible schedule and then find the non-dominated schedules. The Epsilon constraint method is used to find the optimal schedule out of the non-dominated schedules. A hypothetical case is presented to demonstrate the effectiveness of the proposed approach and the computer program. Using the program, users can obtain the optimal schedule of daily energy consumption by inputting the intended time and duration of activities and the given time-of-use electricity prices.
Keywords: bi-criteria optimization, energy consumption, time-of-use price, scheduling
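A toy sketch of the bi-criteria step described above: candidate schedules are scored on cost and comfort, non-dominated ones are kept, and the epsilon-constraint method picks the most comfortable schedule whose cost stays under a budget; all numbers are hypothetical.

```python
# Each candidate daily schedule is scored as (electricity cost, comfort level); values are hypothetical.
schedules = {"A": (5.2, 0.95), "B": (4.0, 0.80), "C": (3.8, 0.55), "D": (4.1, 0.78), "E": (6.0, 0.97)}

def dominated(name):
    c, u = schedules[name]
    return any(c2 <= c and u2 >= u and (c2, u2) != (c, u) for c2, u2 in schedules.values())

pareto = {n: v for n, v in schedules.items() if not dominated(n)}
print("non-dominated:", pareto)                      # D is dominated by B (cheaper and more comfortable)

epsilon = 4.5                                        # cost budget for the epsilon-constraint step
feasible = {n: v for n, v in pareto.items() if v[0] <= epsilon}
best = max(feasible, key=lambda n: feasible[n][1])   # maximize comfort subject to cost <= epsilon
print("chosen schedule:", best)
```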
Procedia PDF Downloads 60

5050 Optimization of Reliability and Communicability of a Random Two-Dimensional Point Pattern Using Delaunay Triangulation
Authors: Sopheak Sorn, Kwok Yip Szeto
Abstract:
Reliability is one of the important measures of how well a system meets its design objective; mathematically, it is the probability that a complex system will perform satisfactorily. When the system is described by a network of N components (nodes) and their L connections (links), the reliability of the system becomes a network design problem, which is an NP-hard combinatorial optimization problem. In this paper, we address the network design problem for a random point pattern in two dimensions. We make use of a Voronoi construction with each cell containing exactly one point of the pattern and compute the reliability of the Voronoi diagram's dual, i.e., the Delaunay graph. We further investigate the communicability of the Delaunay network. We find that the homogeneity of a Delaunay network's degree distribution correlates positively with its reliability and negatively with its communicability. Based on these correlations, we alter the communicability and the reliability by performing random edge flips, which preserve the number of links and nodes in the network but can increase the communicability of a Delaunay network at the cost of its reliability. This transformation is later used to optimize a Delaunay network toward the optimum geometric mean of communicability and reliability. We also discuss the importance of edge flips in the evolution of real soap froth in two dimensions.
Keywords: communicability, Delaunay triangulation, edge flip, reliability, two-dimensional network, Voronoi
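A small sketch of the construction described above: a Delaunay graph is built over random points and its communicability is computed with networkx; the reliability computation and the edge-flip optimization are not reproduced here.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
points = rng.random((30, 2))                  # random 2D point pattern
tri = Delaunay(points)

G = nx.Graph()
for simplex in tri.simplices:                 # each triangle contributes its three edges
    a, b, c = simplex
    G.add_edges_from([(a, b), (b, c), (a, c)])

degrees = np.array([d for _, d in G.degree()])
print("degree mean/std:", degrees.mean(), degrees.std())   # homogeneity of the degree distribution

comm = nx.communicability(G)                  # pairwise communicability (matrix-exponential based)
total = sum(sum(row.values()) for row in comm.values())
print("total communicability:", total)
```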
Procedia PDF Downloads 419

5049 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia
Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden
Abstract:
The proposed stringent mitigation targets require an immediate start for a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal- and gas-fired plants dominating the total produced electricity over the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, where wind and solar energy could play a key role in the decarbonization of Australia’s future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system to ensure the security of supply with increasing shares of such intermittent sources. One main flexibility dimension to facilitate system integration of high shares of VRES is to increase cross-sectoral integration through coupling of electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model for Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.
Keywords: decarbonization, energy system modelling, renewable energy, sector coupling
Procedia PDF Downloads 133

5048 Solid-Liquid-Solid Interface of Yakam Matrix: Mathematical Modeling of the Contact Between an Aircraft Landing Gear and a Wet Pavement
Authors: Trudon Kabangu Mpinga, Ruth Mutala, Shaloom Mbambu, Yvette Kalubi Kashama, Kabeya Mukeba Yakasham
Abstract:
A mathematical model is developed to describe the contact dynamics between the landing gear wheels of an aircraft and a wet pavement during landing. The model is based on nonlinear partial differential equations, using the Yakam Matrix to account for the interaction between solid, liquid, and solid phases. This framework incorporates the influence of environmental factors, particularly water or rain on the runway, on braking performance and aircraft stability. Given the absence of exact analytical solutions, our approach enhances the understanding of key physical phenomena, including Coulomb friction forces, hydrodynamic effects, and the deformation of the pavement under the aircraft's load. Additionally, the dynamics of aquaplaning are simulated numerically to estimate the braking performance limits on wet surfaces, thereby contributing to strategies aimed at minimizing risk during landing on wet runways.
Keywords: aircraft, modeling, simulation, yakam matrix, contact, wet runway
Procedia PDF Downloads 8

5047 Optimization of Strategies and Models Review for Optimal Technologies-Based on Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, A. Elazim Negm
Abstract:
Recently, green architecture has become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies, such as artificial intelligence techniques, are used to complement current practices in creating greener structures and keeping the built environment more sustainable. The most common objective is that green buildings should be designed to minimize the overall impact of the built environment on ecosystems in general, and particularly on human health and the natural environment. This leads to protecting occupant health, improving employee productivity, reducing pollution, and sustaining the environment. In green building design, multiple parameters, which may be interrelated, contradictory, vague, and of a qualitative/quantitative nature, must be considered. This paper presents a comprehensive, critical, state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture/buildings can be improved using these analysis technologies to seek optimal green solution strategies and models that assist in making the best possible decision among different alternatives.
Keywords: green architecture/building, technologies, optimization, strategies, fuzzy techniques, models
Procedia PDF Downloads 475

5046 Physical Modeling of Woodwind Ancient Greek Musical Instruments: The Case of Plagiaulos
Authors: Dimitra Marini, Konstantinos Bakogiannis, Spyros Polychronopoulos, Georgios Kouroupetroglou
Abstract:
Archaeomusicology cannot rely entirely on the study of excavated ancient musical instruments, as their condition is usually not ideal (e.g., missing or eroded parts) and because of the concern of damaging the originals during experiments. To overcome these obstacles, researchers build replicas. This technique is still the most popular one, although it is rather expensive and time-consuming. Throughout the last decades, the development of physical modeling techniques has provided tools that enable the study of musical instruments through their digitally simulated models. This is not only a more cost- and time-efficient technique but also provides additional flexibility, as the user can easily modify parameters such as geometrical features and materials. This paper thoroughly describes the steps to create a physical model of a woodwind ancient Greek instrument, Plagiaulos. This instrument could be considered the ancestor of the modern flute due to the common geometry and air-jet excitation mechanism. Plagiaulos consists of a single resonator with an open end and a number of tone holes. The combination of closed and open tone holes produces the pitch variations. In this work, the effects of all the instrument’s components are described by means of physics and then simulated based on digital waveguides. The synthesized sound of the proposed model complies with the theory, highlighting its validity. Further, the synthesized sound of the model simulating the Plagiaulos of Koile (2nd century BCE) was compared with that of its replica built in our laboratory by following the scientific methodologies of archaeomusicology. The aforementioned results verify that robust dynamic digital tools can be introduced in the field of computational, experimental archaeomusicology.
Keywords: archaeomusicology, digital waveguides, musical acoustics, physical modeling
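To illustrate the delay-line idea behind waveguide synthesis, the toy resonator below circulates a noise burst in a loop whose length sets the pitch, with an averaging filter as the loss element. It is a Karplus-Strong-like sketch, not the authors' Plagiaulos model (no tone holes or air-jet excitation).

```python
import numpy as np

fs = 44100                     # sample rate, Hz
f0 = 440.0                     # target pitch, Hz (illustrative)
N = int(round(fs / f0))        # delay-line length in samples sets the resonant frequency

rng = np.random.default_rng(3)
delay = rng.uniform(-1, 1, N)  # noise burst as a crude excitation
out = np.zeros(fs)             # one second of audio samples

for n in range(len(out)):
    out[n] = delay[0]
    # loss/low-pass element: average of the two oldest samples, slightly attenuated
    new_sample = 0.996 * 0.5 * (delay[0] + delay[1])
    delay = np.roll(delay, -1)
    delay[-1] = new_sample
# `out` now holds a decaying tone near 440 Hz that could be written to a WAV file.
```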
Procedia PDF Downloads 113

5045 Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet
Authors: Azene Zenebe
Abstract:
Deep learning is a subset of machine learning that incorporates techniques for the construction of artificial neural networks and has been found useful for modeling complex problems with large datasets. Deep learning requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to solving advanced problems and performing data-driven research. Cyber threat intelligence (CTI) is actionable information or insight that an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review are presented along with the results of an experimental study that compares the performance of tree-based and function-based machine learning, including deep learning algorithms, using a secondary dataset collected from the darknet.
Keywords: deep learning, cyber security, cyber threat modeling, tree-based machine learning, function-based machine learning, data science
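A minimal sketch of the comparison described above, contrasting a tree-based learner with a function-based (neural network) learner; the synthetic dataset stands in for darknet-derived features and labels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled darknet records (e.g., threat-related vs. benign).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "tree-based (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "function-based (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```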
Procedia PDF Downloads 154

5044 Application of Causal Inference and Discovery in Curriculum Evaluation and Continuous Improvement
Authors: Lunliang Zhong, Bin Duan
Abstract:
The undergraduate graduation project is a vital part of the higher education curriculum, crucial for engineering accreditation. Current evaluations often summarize data without identifying underlying issues. This study applies the Peter-Clark algorithm to analyze causal relationships within the graduation project data of an Electronics and Information Engineering program, creating a causal model. Structural equation modeling confirmed the model's validity. The analysis reveals key teaching stages affecting project success, uncovering problems in the process. Introducing causal discovery and inference into project evaluation helps identify issues and propose targeted improvement measures. The effectiveness of these measures is validated by comparing the learning outcomes of two student cohorts, stratified by confounding factors, leading to improved teaching quality.
Keywords: causal discovery, causal inference, continuous improvement, Peter-Clark algorithm, structural equation modeling
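One building block of the Peter-Clark (PC) algorithm is a conditional-independence test; the sketch below implements a Gaussian partial-correlation test with a Fisher z statistic on synthetic data. It illustrates the test only, not the full PC search or the structural equation model used in the study.

```python
import numpy as np
from scipy import stats

def partial_corr_independent(x, y, Z, alpha=0.05):
    """Test whether x and y are independent given Z, via partial correlation of regression residuals."""
    Z1 = np.column_stack([np.ones(len(x)), Z]) if Z.size else np.ones((len(x), 1))
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    r = np.corrcoef(rx, ry)[0, 1]
    n, k = len(x), Z.shape[1]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)   # Fisher z transform
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha, r, p

rng = np.random.default_rng(4)
z = rng.normal(size=500)            # e.g., a shared upstream teaching stage (synthetic)
x = z + 0.3 * rng.normal(size=500)  # two downstream outcomes driven by z
y = z + 0.3 * rng.normal(size=500)
print(partial_corr_independent(x, y, np.empty((500, 0))))   # dependent when Z is empty
print(partial_corr_independent(x, y, z.reshape(-1, 1)))     # independent once z is conditioned on
```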
Procedia PDF Downloads 18

5043 Optimization of Assay Parameters of L-Glutaminase from Bacillus cereus MTCC1305 Using Artificial Neural Network
Authors: P. Singh, R. M. Banik
Abstract:
An artificial neural network (ANN) was employed to optimize the assay parameters, viz. time, temperature, pH of the reaction mixture, enzyme volume, and substrate concentration, of L-glutaminase from Bacillus cereus MTCC 1305. The ANN model showed a high coefficient of determination (0.9999), a low root mean square error (0.6697), and a low absolute average deviation. A multilayer perceptron neural network trained with an error back-propagation algorithm was used to develop the predictive model, and its topology was obtained as 5-3-1 after applying the Levenberg-Marquardt (LM) training algorithm. The predicted activity of L-glutaminase was 633.7349 U/l at the optimum assay parameters, viz. pH of the reaction mixture (7.5), reaction time (20 minutes), incubation temperature (35 °C), substrate concentration (40 mM), and enzyme volume (0.5 ml). The prediction was verified by running an experiment at the simulated optimum assay conditions, and the activity was obtained as 634.00 U/l. The application of the ANN model for optimization of the assay conditions improved the activity of L-glutaminase 1.499-fold.
Keywords: Bacillus cereus, L-glutaminase, assay parameters, artificial neural network
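A rough sketch of the workflow above with scikit-learn: a 5-3-1 multilayer perceptron is fitted to synthetic assay data and then queried over a parameter grid to locate the predicted optimum. Scikit-learn does not provide Levenberg-Marquardt training, so the L-BFGS solver is used here as a stand-in, and all data values are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Columns: pH, time (min), temperature (C), substrate (mM), enzyme volume (ml); synthetic designs.
lo, hi = [6.0, 10, 25, 10, 0.1], [9.0, 40, 45, 60, 1.0]
X = rng.uniform(lo, hi, size=(120, 5))
# Synthetic activity surface peaking near pH 7.5, 20 min, 35 C, 40 mM, 0.5 ml (not real data).
opt = np.array([7.5, 20, 35, 40, 0.5])
y = 630 - (((X - opt) / [1.0, 10, 8, 20, 0.4]) ** 2).sum(axis=1) * 20 + rng.normal(0, 2, 120)

net = MLPRegressor(hidden_layer_sizes=(3,), solver="lbfgs", max_iter=5000, random_state=0).fit(X, y)

# Grid search over assay parameters for the maximum predicted activity.
grids = [np.linspace(a, b, 7) for a, b in zip(lo, hi)]
grid = np.array(np.meshgrid(*grids)).reshape(5, -1).T
best = grid[np.argmax(net.predict(grid))]
print("predicted optimum (pH, time, temp, substrate, volume):", best)
```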
Procedia PDF Downloads 429

5042 The Impact of Gamification on Self-Assessment for English Language Learners in Saudi Arabia
Authors: Wala A. Bagunaid, Maram Meccawy, Arwa Allinjawi, Zilal Meccawy
Abstract:
Continuous self-assessment becomes crucial in self-paced online learning environments. Students often depend on themselves to assess their progress, which is considered an essential requirement for any successful learning process. Today’s education institutions face major problems around student motivation and engagement. Thus, personalized e-learning systems aim to help and guide the students. Gamification provides an opportunity to help students with self-assessment and social comparison with other students by attempting to harness the motivational power of games and applying it to the learning environment. Furthermore, Open Social Student Modeling (OSSM), one of the latest user modeling technologies, is believed to improve students’ self-assessment and to allow them to compare themselves socially with other students. This research integrates the OSSM approach and gamification concepts in order to provide self-assessment for English language learners at King Abdulaziz University (KAU), achieved through an interactive visual representation of their learning progress.
Keywords: e-learning system, gamification, motivation, social comparison, visualization
Procedia PDF Downloads 152

5041 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty
Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus
Abstract:
Electric vehicles are a popular transportation medium renowned for potential environmental benefits. However, large and uncontrolled charging volumes can impact distribution networks negatively. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is developed due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, with the decision-makers needing to make a collaborative decision.
Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming
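A minimal weighted-sum sketch of the trade-off described above: one EV's charging is spread over four time slots to minimize a weighted combination of energy cost and the network peak, solved as a linear program; the prices, base load, and weights are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([0.30, 0.20, 0.10, 0.25])   # time-of-use tariff per slot (hypothetical)
base = np.array([3.0, 2.5, 2.0, 4.0])        # non-EV load per slot, kW (hypothetical)
energy_needed, p_max = 6.0, 7.0              # EV energy demand (kWh) and charger limit (kW)
w_cost, w_peak = 1.0, 0.5                    # weighted-sum objective weights

# Variables: x_0..x_3 (EV charging per slot) and p (network peak).
c = np.concatenate([w_cost * price, [w_peak]])
A_ub = np.hstack([np.eye(4), -np.ones((4, 1))])   # base_t + x_t - p <= 0 for every slot t
b_ub = -base
A_eq = np.array([[1.0, 1.0, 1.0, 1.0, 0.0]])      # total charged energy equals the demand
b_eq = [energy_needed]
bounds = [(0, p_max)] * 4 + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("charging schedule:", res.x[:4], "resulting peak:", res.x[4])
```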
Procedia PDF Downloads 179

5040 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A sensor network (SN) can be considered as operating in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to a processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design of complex systems can be improved by using MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177

5039 Machine Learning Assisted Performance Optimization in Memory Tiering
Authors: Derssie Mebratu
Abstract:
As a large variety of microservices, web services, social graph applications, and media applications are continuously developed, it is vital to design and build a reliable, efficient, and fast memory tiering system. Despite limited design, implementation, and deployment over the last few years, several techniques are currently being developed to improve memory tiering systems in the cloud. Some of these techniques develop an optimal scanning frequency; improve and track page movement; identify recently accessed pages; distribute pages across the tiers; and classify pages as hot, warm, or cold, so that hot pages are stored in the first tier, Dynamic Random Access Memory (DRAM), warm pages in the second tier, Compute Express Link (CXL) memory, and cold pages in the third tier, Non-Volatile Memory (NVM). Beyond current proposals and implementations, we also develop a new technique based on a machine learning algorithm, with which throughput improves by 25% and latency by 95% compared to the baseline.
Keywords: machine learning, Bayesian optimization, memory tiering, CXL, DRAM
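A toy sketch of the hot/warm/cold placement policy mentioned above: pages are scored by an exponentially decayed access count and assigned to DRAM, CXL, or NVM tiers. The thresholds and access traces are made up, and the Bayesian-optimization tuning from the paper is not shown.

```python
from collections import defaultdict

DECAY = 0.8           # per-interval decay of the access score (illustrative)
HOT, WARM = 5.0, 1.0  # score thresholds for tier placement (illustrative)

scores = defaultdict(float)

def observe_interval(accesses):
    """Update per-page scores from one scan interval's access counts."""
    for page in scores:
        scores[page] *= DECAY
    for page, count in accesses.items():
        scores[page] += count

def place(page):
    s = scores[page]
    return "DRAM (hot)" if s >= HOT else "CXL (warm)" if s >= WARM else "NVM (cold)"

# Two scan intervals with made-up per-page access counts.
observe_interval({"p1": 9, "p2": 2, "p3": 0})
observe_interval({"p1": 8, "p2": 0, "p3": 1})
for page in ["p1", "p2", "p3"]:
    print(page, round(scores[page], 2), "->", place(page))
```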
Procedia PDF Downloads 96

5038 A Computational Diagnostics for Dielectric Barrier Discharge Plasma
Authors: Zainab D. Abd Ali, Thamir H. Khalaf
Abstract:
In this paper, the characteristics of the electric discharge in the gap between two parallel dielectric plates are studied. The gap is filled with argon gas at atmospheric pressure and ambient temperature; the gap thickness is typically less than 1 mm, and the dielectric may be up to 10 cm in diameter. A sinusoidal voltage at RF frequency is applied to one of the dielectric plates, while the other plate is electrically grounded. The simulation in this work relies on a Boltzmann equation solver for the first few moments, a fluid model, and plasma chemistry, in a one-dimensional model. This modeling gives insight into the characteristics of the dielectric barrier discharge by studying the gas breakdown properties, electric field, and electric potential, and by calculating the electron density, mean electron energy, electron current density, ion current density, and total plasma current density. The investigation also includes: 1. the influence of changing the gap thickness between the two plates (doubled or halved); 2. the effect of the thickness of the dielectric plates; 3. the influence of changing the type and properties of the dielectric material (glass, silicon, Teflon).
Keywords: computational diagnostics, Boltzmann equation, electric discharge, electron density
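As a small illustration of the electric-potential part of such a 1D model, the sketch below solves the Poisson equation across a sub-millimetre gap with finite differences, using an assumed net charge density profile and applied voltage; it is not the coupled fluid-chemistry simulation of the paper.

```python
import numpy as np

eps0 = 8.854e-12
L = 1.0e-3                       # gap width, 1 mm
N = 201
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

rho = 1.0e-5 * np.exp(-((x - L / 2) / (L / 10)) ** 2)   # assumed net charge density, C/m^3
V_applied, V_ground = 1000.0, 0.0                       # boundary potentials, V

# Solve d^2V/dx^2 = -rho/eps0 with Dirichlet boundaries via a tridiagonal system.
A = np.diag(-2.0 * np.ones(N - 2)) + np.diag(np.ones(N - 3), 1) + np.diag(np.ones(N - 3), -1)
b = -rho[1:-1] / eps0 * h**2
b[0] -= V_applied
b[-1] -= V_ground
V = np.concatenate([[V_applied], np.linalg.solve(A, b), [V_ground]])

E = -np.gradient(V, x)            # electric field derived from the potential
print("max field magnitude (V/m):", np.abs(E).max())
```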
Procedia PDF Downloads 777

5037 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
As the incidence of cancer increases, radiotherapy technologies and modalities have advanced, and the importance of preclinical models in cancer research is growing. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose using commercial software. The small animal phantom was modeled from the 4D digital mouse whole-body (Moby) phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with Matlab; the two-dimensional CT images were then converted to a three-dimensional image, which can be segmented and cropped in sagittal, coronal, and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line values, thresholding was carried out to make a mask connecting all regions within the same threshold range. Using the thresholding method, we segmented the images into three parts (bone, body tissue, and lung); to separate neighboring pixels between lung and body tissue, we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. Edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees, and 5 iterations. The processed 3D object was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We thus obtained a 3D object of a realistic small animal phantom with a width of 2.631 cm, a thickness of 2.361 cm, and a length of 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of the small preclinical animal phantom should increase the reliability of absorbed dose verification in small animals for preclinical studies.
Keywords: mimics, preclinical small animal, segmentation, 3D printer
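The thresholding and region steps described above can be mimicked in a few lines; the sketch below segments a synthetic CT-like volume into bone, soft tissue, and lung/air using Hounsfield-style thresholds and labels connected low-density regions. The threshold values are typical illustrative ranges, not the study's settings, and no Mimics functionality is used.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
# Synthetic CT-like volume in Hounsfield units: soft-tissue background, a bone block, an air/lung pocket.
vol = rng.normal(40, 15, size=(40, 40, 40))
vol[10:20, 10:20, 10:20] = 700          # "bone"
vol[25:35, 25:35, 25:35] = -750         # "lung"/air pocket

bone = vol > 300                        # illustrative HU thresholds
lung = vol < -400
tissue = ~bone & ~lung

labels, n_regions = ndimage.label(lung) # connected components, akin to region growing
print("bone voxels:", bone.sum(), "tissue voxels:", tissue.sum(), "lung regions:", n_regions)
```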
Procedia PDF Downloads 366

5036 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
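The 1-hot encoding mentioned above can be illustrated directly: an integer variable x in {0, 1, 2, 3} is represented by four binary bits with a penalty forcing exactly one bit on, and the resulting QUBO is checked by brute force. This toy example stands in for the constraint-embedding idea only, not for the actual MINLP formulation or the D-Wave embedding.

```python
import itertools
import numpy as np

values = np.array([0, 1, 2, 3])   # integer values x may take
target = 2                        # toy objective: minimize (x - target)^2
P = 10.0                          # penalty weight enforcing the 1-hot constraint

n = len(values)
Q = np.zeros((n, n))              # upper-triangular QUBO matrix
for i in range(n):
    Q[i, i] += (values[i] - target) ** 2   # objective term, active when bit i is on
    Q[i, i] += -P                          # linear part of the penalty P*(sum_i b_i - 1)^2
    for j in range(i + 1, n):
        Q[i, j] += 2 * P                   # quadratic part of the penalty

energy = lambda b: np.array(b) @ Q @ np.array(b) + P   # +P is the penalty's constant term
best = min(itertools.product([0, 1], repeat=n), key=energy)
print("lowest-energy bitstring:", best)    # expected: one-hot on the value x = 2
```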
Procedia PDF Downloads 525

5035 Building Capacity and Personnel Flow Modeling for Operating amid COVID-19
Authors: Samuel Fernandes, Dylan Kato, Emin Burak Onat, Patrick Keyantuo, Raja Sengupta, Amine Bouzaghrane
Abstract:
The COVID-19 pandemic has spread across the United States, forcing cities to impose stay-at-home and shelter-in-place orders. Building operations had to adjust as non-essential personnel worked from home. But as buildings prepare for personnel to return, they need to plan for safe operations amid new COVID-19 guidelines. In this paper we propose a methodology for capacity and flow modeling of personnel within buildings to safely operate under COVID-19 guidelines. We model personnel flow within buildings by network flows with queuing constraints. We study maximum flow, minimum cost, and minimax objectives. We compare our network flow approach with a simulation model through a case study and present the results. Our results showcase various scenarios of how buildings could be operated under new COVID-19 guidelines and provide a framework for building operators to plan and operate buildings in this new paradigm.
Keywords: network analysis, building simulation, COVID-19
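A small sketch of the network-flow view described above: personnel move from an entrance through corridors and elevators to floors, with capacities standing in for occupancy limits, and networkx computes the maximum throughput; the layout and capacities are hypothetical.

```python
import networkx as nx

G = nx.DiGraph()
# Capacities are hypothetical people-per-interval limits under distancing rules.
G.add_edge("entrance", "lobby", capacity=30)
G.add_edge("lobby", "elevator_A", capacity=8)
G.add_edge("lobby", "stairs", capacity=12)
G.add_edge("elevator_A", "floor_2", capacity=8)
G.add_edge("stairs", "floor_2", capacity=12)
G.add_edge("floor_2", "workspaces", capacity=15)

flow_value, flow_dict = nx.maximum_flow(G, "entrance", "workspaces")
print("max personnel throughput per interval:", flow_value)
print("flow routed via elevator_A:", flow_dict["lobby"]["elevator_A"])
```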
Procedia PDF Downloads 160

5034 Long-Term Results of Coronary Bifurcation Stenting with Drug Eluting Stents
Authors: Piotr Muzyk, Beata Morawiec, Mariusz Opara, Andrzej Tomasik, Brygida Przywara-Chowaniec, Wojciech Jachec, Ewa Nowalany-Kozielska, Damian Kawecki
Abstract:
Background: Coronary bifurcation is one of the most complex lesions in patients with coronary artery disease. Provisional T-stenting is currently one of the recommended techniques. The aim was to assess optimal methods of treatment in the era of drug-eluting stents (DES). Methods: The registry consisted of data from 1916 patients treated with percutaneous coronary interventions (PCI) using either first- or second-generation DES. Patients with a bifurcation lesion entered the analysis. Major adverse cardiac and cardiovascular events (MACCE) were assessed at one year of follow-up and comprised death, acute myocardial infarction (AMI), repeated PCI (re-PCI) of the target vessel, and stroke. Results: Of 1916 registry patients, 204 patients (11%) were diagnosed with a bifurcation lesion >50% and entered the analysis. The most commonly used technique was provisional T-stenting (141 patients, 69%). Optimization with the kissing-balloon technique was performed in 45 patients (22%). In 59 patients (29%) a second-generation DES was implanted, while in 112 patients (55%) a first-generation DES was used. In 33 patients (16%) both types of DES were used. The procedure success rate (TIMI 3 flow) was achieved in 98% of patients. In one-year follow-up, there were 39 MACCE (19%) (9 deaths, 17 AMI, 16 re-PCI, and 5 strokes). Provisional T-stenting resulted in a similar rate of MACCE to other techniques (16% vs. 5%, p=0.27) and a similar occurrence of re-PCI (6% vs. 2%, p=0.78). Post-PCI kissing-balloon technique gave outcomes equal to those of patients in whom no optimization technique was used (3% vs. 16% of MACCE, p=0.39). The type of implanted DES (second- vs. first-generation) had no influence on MACCE (4% vs. 14%, respectively, p=0.12) and re-PCI (1.7% vs. 51% of patients, respectively, p=0.28). Conclusions: The treatment of bifurcation lesions with PCI represents a high-risk procedure with a high rate of MACCE. Stenting technique, optimization of PCI, and the generation of the implanted stent should be personalized for each case to balance the risk of the procedure. In this setting, the operator's experience might be a factor in better outcomes, which should be further investigated.
Keywords: coronary bifurcation, drug eluting stents, long-term follow-up, percutaneous coronary interventions
Procedia PDF Downloads 204

5033 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass
Authors: Xiang Zheng, Zhaoping Zhong
Abstract:
In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel via different routes, and the main material and energy flow results were obtained. Process optimization and evaluation were carried out for four routes: cellulosic biomass pyrolysis/gasification to low-carbon olefins followed by olefin oligomerization; biomass hydrothermal pyrolysis and polymerization to jet fuel; biomass fermentation to ethanol; and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass species (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses followed the order rice husk > poplar wood > corn stover. In terms of human health hazard potential and solid waste potential, the order was poplar > rice husk > corn stover. In the poplar pathway, an input of 100 kg of poplar biomass yielded 11.9 kg of aviation kerosene fraction and 6.3 kg of gasoline fraction. The energy conversion rate of the system was 31.6% when the output product energy included only the aviation kerosene product. In the basic hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the basic process was 33.09%, which could be increased to 38.47% after the optimal utilization of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization, with the exergy loss mainly coming from concentrating the dilute precursor solution. The global warming potential among the environmental impacts is mostly affected by the production process. Poplar wood was used as the raw material in the process of ethanol production from cellulosic biomass. The simulation results showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The energy output of boiler combustion reached 94.1 MJ, the unit energy consumption of the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was mainly concentrated in the production process and agricultural processes. Building on the original biomass-pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue from cellulose fermentation to ethanol was used as the pyrolysis feedstock, coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.
Keywords: biomass conversion, biofuel, process optimization, life cycle assessment
Procedia PDF Downloads 70

5032 Daylightophil Approach towards High-Performance Architecture for Hybrid-Optimization of Visual Comfort and Daylight Factor in BSk
Authors: Mohammadjavad Mahdavinejad, Hadi Yazdi
Abstract:
The greatest influence we receive from the world is shaped through visual form; thus, light is an inseparable element of human life. The use of daylight in visual perception and environment readability is an important issue for users. Given the hazards of greenhouse gas emissions from fossil fuels, and in line with efforts to reduce energy consumption, the correct use of daylight results in less energy consumed by artificial lighting, heating, and cooling systems. Windows are usually the starting points for analysis and simulations to achieve visual comfort and energy optimization; therefore, attention should be paid to the orientation of buildings to minimize electrical energy use and maximize the use of daylight. In this paper, using the DesignBuilder software, the effect of the orientation of an 18 m² (3 m × 6 m) room with a height of 3 m in the city of Tehran has been investigated, considering the design constraints. In these simulations, the orientation of the building was changed in one-degree increments, and the window is located on the smaller face (3 m × 3 m) of the building with an 80% ratio. The results indicate that building orientation has a strong effect on energy efficiency in meeting high-performance architecture and planning goals and objectives.
Keywords: daylight, window, orientation, energy consumption, design builder
Procedia PDF Downloads 233

5031 A Constrained Neural Network Based Variable Neighborhood Search for the Multi-Objective Dynamic Flexible Job Shop Scheduling Problems
Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir
Abstract:
In this paper, a new neural network based variable neighborhood search is proposed for multi-objective dynamic flexible job shop scheduling problems. The neural network controls the problem's constraints to prevent infeasible solutions, while the Variable Neighborhood Search (VNS) applies moves based on the critical block concept to improve the solutions. Two approaches are used for managing the constraints: in the first, infeasible solutions are repaired according to the constraints after a move is applied, while in the second, infeasible moves are prevented. Several neighborhood structures from the literature, with some modifications, as well as new structures, are used in the VNS. The suggested neighborhoods are more systematically defined and easy to implement. The comparison is based on a multi-objective flexible job shop scheduling problem that is dynamic because of the jobs' different release times and machine breakdowns. The results show that the presented method performs better than the compared VNS variants selected from the literature.
Keywords: constrained optimization, neural network, variable neighborhood search, flexible job shop scheduling, dynamic multi-objective optimization
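A generic variable neighborhood search skeleton is sketched below on a toy permutation objective, to show the shaking/local-search loop the paper builds on; the actual neighborhood structures, critical-block moves, and neural-network constraint handling are not reproduced.

```python
import random

random.seed(0)
jobs = list(range(8))
weights = [3, 1, 4, 1, 5, 9, 2, 6]          # toy per-job processing weights

def cost(perm):                              # toy objective: position-weighted sum
    return sum((i + 1) * weights[j] for i, j in enumerate(perm))

def shake(perm, k):                          # neighborhood N_k: k random swaps
    p = perm[:]
    for _ in range(k):
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

def local_search(perm):                      # improvement over adjacent swaps
    improved = True
    while improved:
        improved = False
        for i in range(len(perm) - 1):
            cand = perm[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if cost(cand) < cost(perm):
                perm, improved = cand, True
    return perm

best = jobs[:]
for _ in range(50):                          # VNS main loop
    k = 1
    while k <= 3:
        candidate = local_search(shake(best, k))
        if cost(candidate) < cost(best):
            best, k = candidate, 1           # accept and restart from the first neighborhood
        else:
            k += 1
print(best, cost(best))
```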
Procedia PDF Downloads 346

5030 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem
Abstract:
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residual from the k-factor GARMA model is used as a proxy for the conditional variance; these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting
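As a simplified stand-in for the conditional-variance component (not the Gegenbauer G-GARCH itself), the sketch below runs a plain GARCH(1,1) recursion over a residual series from the mean model; the residuals and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
residuals = rng.normal(0, 1, 500)        # stand-in for k-factor GARMA residuals

omega, alpha, beta = 0.05, 0.08, 0.90    # illustrative GARCH(1,1) parameters
sigma2 = np.empty_like(residuals)
sigma2[0] = residuals.var()              # initialize with the unconditional variance
for t in range(1, len(residuals)):
    sigma2[t] = omega + alpha * residuals[t - 1] ** 2 + beta * sigma2[t - 1]

forecast = omega + alpha * residuals[-1] ** 2 + beta * sigma2[-1]   # one-step-ahead variance
print("one-step-ahead conditional variance:", forecast)
```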
Procedia PDF Downloads 231

5029 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria
Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova
Abstract:
Ambient air pollution with fine particulate matter (PM10) is a systematic, permanent problem in many countries around the world. The accumulation of a large number of measurements of both the PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, over a period of 5 years are used. Predictors in the models are seven meteorological variables, time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast future PM10 concentrations two days ahead of the last date in the modeling procedure and show very accurate results.
Keywords: cross-validation, decision tree, lagged variables, short-term forecasting
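A small sketch of the lagged-predictor setup described above: a regression tree is fitted to forecast PM10 from its 1- and 2-day lags plus a lagged meteorological variable, using synthetic series rather than the Pleven data.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(8)
n = 400
temp = 10 + 8 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 2, n)   # synthetic temperature
pm10 = 40 - 1.2 * temp + rng.normal(0, 5, n)                                   # synthetic PM10 series

df = pd.DataFrame({"pm10": pm10, "temp": temp})
df["pm10_lag1"] = df["pm10"].shift(1)      # PM10 delayed by one day
df["pm10_lag2"] = df["pm10"].shift(2)      # PM10 delayed by two days
df["temp_lag1"] = df["temp"].shift(1)      # lagged meteorological predictor
df = df.dropna()

X, y = df[["pm10_lag1", "pm10_lag2", "temp_lag1"]], df["pm10"]
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X[:-30], y[:-30])
print("hold-out R^2:", tree.score(X[-30:], y[-30:]))
```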
Procedia PDF Downloads 194

5028 Modeling of the Random Impingement Erosion Due to the Impact of Solid Particles
Authors: Siamack A. Shirazi, Farzin Darihaki
Abstract:
Solid particles can be found in many multiphase flows, including transport pipelines and pipe fittings. Such particles interact with the pipe material and cause erosion, which threatens the integrity of the system. Therefore, predicting the erosion rate is an important factor in the design and monitoring of such systems. Mechanistic models can provide reliable predictions for many conditions while demanding only relatively low computational cost. Mechanistic models utilize a representative particle trajectory to predict the impact characteristics of the majority of the particle impacts that cause the maximum erosion rate in the domain. The erosion caused by particle impacts is not only due to direct impacts but also to random impingements. In the present study, an alternative model has been introduced to describe the erosion due to random impingement of particles. The present model provides a realistic trend for erosion with changes in the particle size and particle Stokes number. The model is examined against experimental data and CFD simulation results and indicates better agreement with the data in comparison to the available models in the literature.
Keywords: erosion, mechanistic modeling, particles, multiphase flow, gas-liquid-solid
Procedia PDF Downloads 169

5027 Approaches to Valuing Ecosystem Services in Agroecosystems From the Perspectives of Ecological Economics and Agroecology
Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo
Abstract:
Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities, and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges, as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity, and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in the decision-making process for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The method used to identify valuation methodologies was a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological, and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem function associated with ES, such as regulation, habitat, production, and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results make it possible to correlate the ecosystem functions with the ecosystem services valued, the specific tools or models used, the dimensions, and the valuation methods. The main methodologies identified are (1) multi-criteria valuation, (2) deliberative-consultative valuation, (3) valuation based on system dynamics modeling, (4) valuation through energy or biophysical balances, (5) valuation through fuzzy logic modeling, and (6) valuation based on agent-based modeling. Among the main conclusions, it is highlighted that the system dynamics modeling approach has high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy and biophysical balances, and to describe through causal loops the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, it is important to integrate the tools and models provided by different methods and to incorporate the characteristics of a complex system such as the agroecosystem, which reduces the limitations in ES valuation processes.
Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services
Procedia PDF Downloads 123

5026 Optimization in the Compressive Strength of Iron Slag Self-Compacting Concrete
Authors: Luis E. Zapata, Sergio Ruiz, María F. Mantilla, Jhon A. Villamizar
Abstract:
Sand as fine aggregate for concrete production needs a feasible substitute due to several environmental issues. In this work, a study is presented of the behavior of self-compacting concrete mixtures in which sand is replaced by iron slag from 0.0% to 50.0% by weight, with the water/cementitious material ratio varied between 0.3 and 0.5. Fresh-state control tests of slump flow, T500, J-ring, and L-box were performed. In the hardened state, the compressive strength was determined and optimization from response surface analysis was performed. The study of the variables in the hardened state was based on inferential statistical analyses using central composite design methodology and subsequent analysis of variance (ANOVA). The most relevant result was an increase in compressive strength of up to 50% over control mixtures at 7, 14, and 28 days of maturity when iron slag replaced natural sand. Considering this result, it is possible to infer that iron slag is an acceptable alternative replacement for natural fine aggregate in structural concrete.
Keywords: ANOVA, iron slag, response surface analysis, self-compacting concrete
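A minimal response-surface sketch in the spirit of the analysis above: a full quadratic model in slag replacement and water/cementitious ratio is fitted by least squares to synthetic strength data, and the fitted surface is scanned for its maximum; the data points and coefficients are placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic design points: slag replacement (%) and water/cementitious ratio.
slag = rng.uniform(0, 50, 30)
wc = rng.uniform(0.3, 0.5, 30)
strength = 35 + 0.4 * slag - 0.004 * slag**2 - 40 * (wc - 0.35) ** 2 + rng.normal(0, 1, 30)

# Full quadratic response surface: b0 + b1*s + b2*w + b3*s*w + b4*s^2 + b5*w^2.
X = np.column_stack([np.ones_like(slag), slag, wc, slag * wc, slag**2, wc**2])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

s_grid, w_grid = np.meshgrid(np.linspace(0, 50, 51), np.linspace(0.3, 0.5, 21))
Xg = np.column_stack([np.ones(s_grid.size), s_grid.ravel(), w_grid.ravel(),
                      (s_grid * w_grid).ravel(), (s_grid**2).ravel(), (w_grid**2).ravel()])
pred = Xg @ coef
i = np.argmax(pred)
print("predicted optimum: slag %.1f%%, w/c %.2f, strength %.1f MPa" % (Xg[i, 1], Xg[i, 2], pred[i]))
```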
Procedia PDF Downloads 144