Search results for: discrete optimization
2411 Upon One Smoothing Problem in Project Management
Authors: Dimitri Golenko-Ginzburg
Abstract:
A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required by the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pre-given time span, the problem becomes NP-complete and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
Keywords: resource smoothing problem, CPM network, look-over algorithm, lexicographical order, approximate algorithm, accuracy estimate
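As a rough illustration of the look-over idea, the sketch below enumerates every combination of discrete start times within each non-critical activity's float and keeps the schedule with the smallest resource peak. The activity data, the time-span step and the single-resource model are illustrative assumptions, not the paper's instance; the exhaustive enumeration also shows why the running time grows exponentially, motivating the approximate algorithm.

```python
from itertools import product

# Activities: (duration, resource_per_period, earliest_start, latest_start).
# Start times are restricted to a discrete grid (step = pre-given time span).
activities = {
    "A": (3, 2, 0, 0),   # critical: no float
    "B": (2, 4, 0, 4),   # non-critical: may start at 0, 2 or 4
    "C": (4, 3, 1, 5),
}
STEP = 2  # pre-given time span between allowed start moments

def peak_usage(schedule):
    """Maximum total resource usage over all time points."""
    horizon = max(s + activities[a][0] for a, s in schedule.items())
    usage = [0] * horizon
    for act, start in schedule.items():
        dur, res, _, _ = activities[act]
        for t in range(start, start + dur):
            usage[t] += res
    return max(usage)

def candidate_starts(act):
    dur, res, es, ls = activities[act]
    return range(es, ls + 1, STEP) if ls > es else [es]

# Exhaustive "look-over": enumerate every combination of discrete starts.
names = list(activities)
best = min(
    (dict(zip(names, starts))
     for starts in product(*(candidate_starts(a) for a in names))),
    key=peak_usage,
)
print(best, "peak =", peak_usage(best))
```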
Procedia PDF Downloads 302
2410 A Study of Hamilton-Jacobi-Bellman Equation Systems Arising in Differential Game Models of Changing Society
Authors: Weihua Ruan, Kuan-Chou Chen
Abstract:
This paper is concerned with a system of Hamilton-Jacobi-Bellman equations coupled with an autonomous dynamical system. The mathematical system arises in the differential game formulation of political economy models as an infinite-horizon continuous-time differential game with discounted instantaneous payoff rates and both continuously and discretely varying state variables. The existence of a weak solution of the PDE system is proven, and a computational scheme for approximate solutions is developed for a class of such systems. A model of democratization is mathematically analyzed as an illustration of the application.
Keywords: Hamilton-Jacobi-Bellman equations, infinite-horizon differential games, continuous and discrete state variables, political-economy models
Procedia PDF Downloads 377
2409 Drying Modeling of Banana Using Cellular Automata
Authors: M. Fathi, Z. Farhaninejad, M. Shahedi, M. Sadeghi
Abstract:
Drying is one of the oldest preservation methods for food and agricultural products. Appropriate control of the operation can be achieved by modeling. The limitations of continuous models for complex boundary conditions and irregular geometries have led to the emergence of novel discrete methods such as cellular automata (CA), which provide a platform for obtaining fast predictions through rule-based mathematics. In this research, a one-dimensional CA was used for simulating thin-layer drying of banana. Banana slices were dried in a convective air dryer and experimental data were recorded for validating the final model. The model was programmed in MATLAB and run for 70,000 iterations with a von Neumann neighborhood. The validation results showed good accordance between experimental and predicted data (R=0.99). Cellular automata are capable of reproducing the expected drying pattern and have powerful potential for solving physical problems with reasonable accuracy and low computational cost.
Keywords: banana, cellular automata, drying, modeling
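A minimal sketch of the kind of one-dimensional CA described above, assuming an illustrative diffusion-like exchange rule with a von Neumann (left/right) neighbourhood and surface evaporation; the rates are assumptions, not fitted banana values.

```python
import numpy as np

# One-dimensional cellular automaton for thin-layer drying: each cell holds a
# local moisture content, the update rule exchanges moisture with the two
# von Neumann neighbours, and the two surface cells lose moisture to the air.
N_CELLS, N_ITER = 50, 70_000
D = 0.2          # neighbour moisture-exchange rate (<= 0.5 for stability)
K_SURF = 0.001   # surface evaporation rate
M_AIR = 0.05     # equilibrium moisture content with the drying air

m = np.full(N_CELLS, 3.0)   # initial moisture (dry basis), uniform slice
for it in range(N_ITER):
    left, right = np.roll(m, 1), np.roll(m, -1)
    left[0], right[-1] = m[0], m[-1]        # no-flux condition at the ends
    m = m + D * (left + right - 2 * m)      # rule: diffusion-like exchange
    m[0] -= K_SURF * (m[0] - M_AIR)         # rule: evaporation at surfaces
    m[-1] -= K_SURF * (m[-1] - M_AIR)
    if it % 10_000 == 0:
        print(f"iter {it:>6}: mean moisture = {m.mean():.3f}")
```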
Procedia PDF Downloads 438
2408 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups
Authors: Naushad Mamode Khan
Abstract:
The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), as it handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is rather computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments, and it is shown to yield estimates as efficient as GQL-I while being far more computationally stable.
Keywords: longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL
Procedia PDF Downloads 354
2407 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach
Authors: Nada Souissi, Mourad Mroua
Abstract:
The main purpose of this study is to examine the interaction between financial asset volatility, economic factors and investor's behavioral indicators related to both the company's and the market's stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian network modeling, the results show both positive and negative relationships between the investor's psychology index, economic factors and predicted stock market return. We reveal that the application of the Bayesian discrete network contributes to identifying the different cause-and-effect relationships between all economic and financial variables and the psychology index.
Keywords: financial asset return predictability, economic factors, investor's psychology index, Bayesian approach, probabilistic networks, parametric learning
Procedia PDF Downloads 149
2406 Optimal Location of the I/O Point in the Parking System
Authors: Jing Zhang, Jie Chen
Abstract:
In this paper, we deal with the optimal I/O point location in an automated parking system. In this system, the S/R machine (storage and retrieval machine) travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principles of AS/RS systems (Automated Storage and Retrieval Systems), we obtain a continuous model in units of time. For the single command cycle using the randomized storage policy, we calculate the probability density function of the system travel time and thus develop the travel time model, and we confirm that the travel time model performs well by comparing it with the discrete case. Finally, we establish the optimal model by minimizing the expected travel time, and it is shown that the optimal location of the I/O point is at the middle of the upper left-hand corner.
Keywords: parking system, optimal location, response time, S/R machine
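The sketch below illustrates the underlying computation, assuming Chebyshev travel (independent, simultaneous horizontal and vertical motion) and uniformly random slots; it estimates the expected single-command cycle time by Monte Carlo and searches candidate I/O points along one rack edge. The dimensions and the edge restriction are assumptions, not the paper's model.

```python
import numpy as np

# Expected single-command travel time: the S/R machine moves in both axes at
# once, so travel time to a slot is the Chebyshev distance in time units.
# Slots are uniformly distributed (randomized storage policy).
rng = np.random.default_rng(0)
W, H = 1.0, 0.4                                   # rack size in travel-time units
slots = rng.uniform([0.0, 0.0], [W, H], size=(200_000, 2))

def expected_cycle_time(x0, y0):
    # single command cycle: travel to the slot and return to the I/O point
    return 2.0 * np.maximum(np.abs(slots[:, 0] - x0),
                            np.abs(slots[:, 1] - y0)).mean()

ys = np.linspace(0.0, H, 41)          # candidate I/O points on the left edge
best_t, best_y = min((expected_cycle_time(0.0, y), y) for y in ys)
print(f"E[T] = {best_t:.4f} at y = {best_y:.3f} (middle of the edge)")
```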
Procedia PDF Downloads 409
2405 Interval Bilevel Linear Fractional Programming
Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi
Abstract:
The Bilevel Programming (BP) model has been presented for a decision-making process that involves two decision makers in a hierarchical structure. In fact, BP models a static two-person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under interdependent constraints; this game is sequential and non-cooperative. The decision-making variables are divided between the two players, and each one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower) where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived; then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. We can then find the best and the worst optimal values of the leader objective function by two algorithms.
Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients
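A minimal sketch of the Charnes and Cooper step on a single (non-interval) linear fractional problem, assuming illustrative data: the fractional objective is converted to an equivalent LP and solved with scipy. In the paper this reduction is applied to both the best and the worst derived problems.

```python
import numpy as np
from scipy.optimize import linprog

# Charnes-Cooper sketch: maximise (c.x + alpha) / (d.x + beta)
# subject to A x <= b, x >= 0, with d.x + beta > 0 on the feasible set.
# Substituting y = t*x, t = 1/(d.x + beta) gives the equivalent LP:
#   max  c.y + alpha*t
#   s.t. A y - b t <= 0,  d.y + beta*t = 1,  y >= 0, t >= 0.
c = np.array([2.0, 1.0]); alpha = 0.0
d = np.array([1.0, 3.0]); beta = 1.0
A = np.array([[1.0, 1.0], [2.0, 0.5]]); b = np.array([4.0, 5.0])

n = len(c)
obj = -np.concatenate([c, [alpha]])                 # linprog minimises
A_ub = np.hstack([A, -b.reshape(-1, 1)])            # A y - b t <= 0
A_eq = np.concatenate([d, [beta]]).reshape(1, -1)   # d.y + beta*t = 1
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(len(b)),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x = y / t                                           # recover original variables
print("optimal x:", x, "objective:", (c @ x + alpha) / (d @ x + beta))
```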
Procedia PDF Downloads 446
2404 Significant Reduction in Specific CO₂ Emission through Process Optimization at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, M. K. G. Choudhury, Santanu Mallick, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
One of the key corporate goals of Tata Steel is to demonstrate Environment Leadership, and decreasing specific CO₂ emission is one of the key steps towards achieving it. At any blast furnace, specific CO₂ emission is directly proportional to fuel intake. To reduce the fuel intake at G Blast Furnace, an initial benchmarking exercise was carried out against international and domestic blast furnaces to determine the potential for improvement. The gap identified during the exercise revealed that the benchmark blast furnaces operated with superior raw material quality compared to G Blast Furnace. However, since the raw materials for G Blast Furnace are sourced from captive mines, improvement in raw material quality was out of scope. Therefore, trials were conducted with different operating regimes to identify the key process parameters which, on optimization, could significantly reduce the fuel intake in G Blast Furnace. The key process parameters identified from the trials were the stoichiometric oxygen ratio, the melting capacity ratio and the burden distribution inside the furnace. These process parameters were optimized to bridge the gap in fuel intake at G Blast Furnace, thereby reducing specific CO₂ emission to benchmark levels. This paradigm shift enabled the fuel intake to be lowered by 70 kg per ton of liquid iron produced, thereby reducing the specific CO₂ emission by 15 percent.
Keywords: benchmark, blast furnace, CO₂ emission, fuel rate
Procedia PDF Downloads 280
2403 Electricity Sector's Status in Lebanon and Portfolio Optimization for the Future Electricity Generation Scenarios
Authors: Nour Wehbe
Abstract:
The Lebanese electricity sector is at the heart of a deep crisis. Electricity in Lebanon is supplied by Électricité du Liban (EdL), which has suffered from technical and financial deficiencies for decades and has proved insufficient, as demand still exceeds supply. As a result, backup generation is widespread throughout Lebanon. The sector drains massive government resources and, on top of that, consumers pay substantial additional amounts to satisfy their electrical needs. While developed countries have been investing in renewable energy for the past two decades, the Lebanese government has come to realize the importance of adopting such energy sourcing strategies for upgrading the country's electricity sector. The diversification of the national electricity generation mix has risen considerably on Lebanon's energy planning agenda, especially since a detailed review of the energy potential in Lebanon has revealed a great potential of solar and wind energy resources, a considerable potential of biomass resources, and an important hydraulic potential. This paper presents a review of the energy status of Lebanon and a detailed review of the EdL structure, with the existing problems and recommended solutions. In addition, scenarios reflecting the implementation of policy projects are presented, and conclusions are drawn on the usefulness of a proposed evaluation methodology and the effectiveness of the adopted new energy policy for the electricity sector in Lebanon.
Keywords: EdL (Électricité du Liban), portfolio optimization, electricity generation mix, mean-variance approach
Procedia PDF Downloads 248
2402 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique
Authors: N. Guo, C. Xu, Z. C. Yang
Abstract:
To ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of the design of an aircraft structure. Therefore, a structural finite element model that can accurately calculate the structural dynamics and their transfer relations is a prerequisite in structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level at each coordinate between the experimental and the analytical mode shapes. The weighted sum of the natural frequency residual and the coordinate modal assurance criterion residual is then used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which synthesizes the advantages of the pattern search (PS) optimization technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment for the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the present dynamic model updating method. The updated results show that the proposed method can successfully correct the erroneous parameters with good robustness.
Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search
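The sketch below shows the flavour of the objective, using the plain modal assurance criterion between paired mode shapes; the paper uses the coordinate-wise variant (COMAC), and the weights and toy data here are assumptions.

```python
import numpy as np

# Weighted objective for FE model updating: natural-frequency residual plus a
# mode-shape correlation residual. MAC compares mode-shape pairs; values near
# 1 indicate good correlation.
def mac(phi_a, phi_e):
    """Modal assurance criterion between two mode-shape vectors."""
    return np.abs(phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

def objective(freqs_fe, freqs_test, modes_fe, modes_test, w_f=1.0, w_m=1.0):
    """Weighted sum of frequency residuals and (1 - MAC) residuals.
    modes_* are (n_dof, n_modes) arrays of paired mode shapes."""
    f_res = np.sum(((freqs_fe - freqs_test) / freqs_test) ** 2)
    m_res = np.sum([1.0 - mac(modes_fe[:, j], modes_test[:, j])
                    for j in range(modes_fe.shape[1])])
    return w_f * f_res + w_m * m_res

# toy check with self-identical data: the residual should be ~0
f = np.array([10.2, 25.7, 41.3])
phi = np.linalg.qr(np.random.default_rng(1).normal(size=(8, 3)))[0]
print(objective(f, f, phi, phi))  # -> 0.0
```

In the full method this objective is minimized over the uncertain model parameters by the HPS technique.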
Procedia PDF Downloads 161
2401 ANFIS Approach for Locating Faults in Underground Cables
Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat
Abstract:
This paper presents a fault identification, classification and fault location estimation method based on the Discrete Wavelet Transform and an Adaptive Network Fuzzy Inference System (ANFIS) for medium-voltage cables in the distribution system. Different faults and locations are simulated in ATP/EMTP, and then selected features of the wavelet-transformed signals are used as input for training the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained and tested using current samples only. The results obtained from the ANFIS output were compared with the real output. From the results, it was found that the percentage error between the ANFIS output and the real output is less than three percent. Hence, it can be concluded that the proposed technique offers high accuracy in both fault classification and fault location.
Keywords: ANFIS, fault location, underground cable, wavelet transform
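A hedged sketch of the feature-extraction stage with PyWavelets: a multi-level DWT of a simulated fault current, summarised by detail-band energies of the kind typically fed to an ANFIS. The wavelet family, level, sampling rate and signal are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

# Multi-level DWT of a fault-current signal, summarised by the energy of each
# detail band; such energies are common inputs for fault classification.
fs = 10_000                                  # sampling rate (assumed), Hz
t = np.arange(0, 0.1, 1 / fs)
current = np.sin(2 * np.pi * 50 * t)         # 50 Hz load current
current[400:420] += 3.0 * np.random.default_rng(0).normal(size=20)  # fault burst

coeffs = pywt.wavedec(current, "db4", level=5)   # [cA5, cD5, cD4, ..., cD1]
features = [float(np.sum(c ** 2)) for c in coeffs[1:]]  # detail-band energies
print([f"{e:.3g}" for e in features])
```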
Procedia PDF Downloads 513
2400 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
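A compact sketch of the PSO-trains-ANN idea on toy data: each particle encodes the flattened weights of a small one-hidden-layer network and the fitness is the training MSE. Network size, swarm settings and data are illustrative assumptions; the study's actual model and features are as described above.

```python
import numpy as np

# PSO searching the weight space of a tiny regression network.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(200, 3))   # e.g. estimate, progress, resources
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=200)

N_IN, N_HID = 3, 5
DIM = N_IN * N_HID + N_HID + N_HID + 1          # W1, b1, W2, b2

def predict(w, X):
    W1 = w[: N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN * N_HID : N_IN * N_HID + N_HID]
    W2, b2 = w[-N_HID - 1 : -1], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return float(np.mean((predict(w, X) - y) ** 2))

# standard global-best PSO
N_PART, ITERS, W, C1, C2 = 30, 300, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (N_PART, DIM))
vel = np.zeros((N_PART, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.uniform(size=(2, N_PART, DIM))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best MSE: {pbest_f.min():.5f}")
```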
Procedia PDF Downloads 59
2399 Tower Crane Selection and Positioning on Construction Sites
Authors: Dirk Briskorn, Michael Dienstknecht
Abstract:
Cranes are a key element in construction projects as they are the primary lifting equipment and among the most expensive construction equipment. Thus, selecting cranes and locating them on-site is an important factor for a project's profitability. We focus on a site with supply and demand areas that have to be connected by tower cranes. There are several types of tower cranes differing in certain specifications such as costs or operating radius. The objective is to select cranes and determine their locations such that each demand area is connected to its supply area at minimum cost. We detail the problem setting and show how to obtain a discrete set of candidate locations for each crane type without losing optimality. This discretization allows us to reduce our problem to the classic set cover problem. Despite its NP-hardness, we achieve good results employing a standard solver and a greedy heuristic, respectively.
Keywords: positioning, selection, standard solver, tower cranes
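The reduction to set cover admits a very short greedy heuristic, sketched below with invented data: each (crane type, location) candidate covers some supply-demand connections, and we repeatedly pick the candidate with the lowest cost per newly covered connection.

```python
# Greedy set-cover sketch for crane selection; all data are illustrative.
connections = {"c1", "c2", "c3", "c4", "c5"}
candidates = {   # (type, location) -> (cost, covered connections)
    ("T30", "locA"): (300, {"c1", "c2"}),
    ("T30", "locB"): (300, {"c2", "c3"}),
    ("T60", "locA"): (520, {"c1", "c2", "c3", "c4"}),
    ("T60", "locC"): (520, {"c3", "c4", "c5"}),
    ("T90", "locB"): (700, {"c1", "c2", "c3", "c4", "c5"}),
}

uncovered, chosen, total = set(connections), [], 0
while uncovered:
    # cheapest cost per newly covered connection
    cand, (cost, cov) = min(
        ((k, v) for k, v in candidates.items() if v[1] & uncovered),
        key=lambda kv: kv[1][0] / len(kv[1][1] & uncovered),
    )
    chosen.append(cand); total += cost
    uncovered -= cov
print(chosen, "total cost:", total)
```

With these numbers the greedy rule ends at a total of 1040 although a single T90 crane (700) covers everything, which is exactly why results from both a standard solver and the heuristic are reported.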
Procedia PDF Downloads 374
2398 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
In extensive applications, the performance of unimodal biometric systems faces a diversity of problems such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems have been proposed to solve the above-stated problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology that is used to identify people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for presence in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database and its performance is compared with various iris recognition algorithms found in the literature.
Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, Scale Invariant Feature Transform (SIFT)
Procedia PDF Downloads 235
2397 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions
Authors: Daneal Rorke, Gueguim Kana
Abstract:
The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material are hampered by the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed and gave a coefficient of determination (R²) of 0.76, producing an optimal RS yield of 2.74 g FS/g substrate. The development of an intelligent model to predict volatile compound fractions gave R² values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol exhibited high sensitivity to acid concentration, alkali concentration and S:L ratio, while phenol also showed high sensitivity to microwave duration and intensity. These findings illustrate the potential of using an intelligent model to predict the volatile compound fraction profile generated during pre-treatment of sorghum leaves, in order to establish a more robust and efficient pre-treatment regime for biofuel production.
Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves
Procedia PDF Downloads 248
2396 Simulation of Performance and Layout Optimization of Solar Collectors with AVR Microcontroller to Achieve Desired Conditions
Authors: Mohsen Azarmjoo, Navid Sharifi, Zahra Alikhani Koopaei
Abstract:
This article aims to conserve energy and optimize the performance of solar water heaters using modern modeling systems. In this study, a large-scale solar water heater is modeled using an AVR microcontroller, a digital processor from the AVR family. This mechatronic system is used to analyze the performance and design of solar collectors, with the ultimate goal of improving the efficiency of the system in use. The findings of this research provide insights into optimizing the performance of solar water heaters: by manipulating the arrangement of solar panels and controlling the water flow through them with the AVR microcontroller, researchers can identify the optimal configurations and operational protocols to achieve the desired temperature and flow conditions. These findings can contribute to the development of more efficient and sustainable heating and cooling systems. This article thus examines the impact of solar panel layout on system efficiency and explores methods of controlling water flow to achieve the desired temperature and flow conditions, contributing to the development of more sustainable heating and cooling systems that rely on renewable energy sources.
Keywords: energy conservation, solar water heaters, solar cooling, simulation, mechatronics
Procedia PDF Downloads 84
2395 Multi-Scaled Non-Local Means Filter for Medical Images Denoising: Empirical Mode Decomposition vs. Wavelet Transform
Authors: Hana Rabbouch
Abstract:
In recent years, there has been considerable growth in denoising techniques devoted mainly to medical imaging. This important evolution is due not only to progress in computing techniques, but also to the emergence of multi-resolution analysis (MRA) on both mathematical and algorithmic bases. In this paper, a comparative study is conducted between the two best-known MRA-based decomposition techniques: the Empirical Mode Decomposition (EMD) and the Discrete Wavelet Transform (DWT). The comparison is carried out in a framework of multi-scale denoising, where a Non-Local Means (NLM) filter is applied scale by scale to a sample of benchmark medical images. The results prove the effectiveness of multiscale denoising, especially when the NLM filtering is coupled with the EMD.
Keywords: medical imaging, non-local means, denoising, multiscale analysis, empirical mode decomposition, wavelets
Procedia PDF Downloads 141
2394 The Importance of Downstream Supply Chain in Supply Chain Risk Management: Multi-Objective Optimization
Authors: Zohreh Khojasteh-Ghamari, Takashi Irohara
Abstract:
An efficient approach in supply chain risk management is to avoid interruptions in the supply chain (SC) before they occur. Although the majority of organizations focus on their first-tier suppliers to avoid SC risk, studies show that first-tier suppliers are the cause in only 60 percent of disruption cases. In the other 40 percent of SC disruptions, the cause lies in the downstream SC, i.e., the second tier and lower. Due to the increasing complexity and interrelation of modern supply chains, SC elements have become difficult to trace. Moreover, studies show a vital need to better understand the integration of risk and visibility, especially in the context of multiple objectives. In this study, we propose a multi-objective programming model to avoid disruption in the SC. The objective of this study is to evaluate the effect of downstream supply chain visibility (SCV) on managing supply chain risk. We propose a multi-objective mathematical programming model with the objective functions of minimizing the total cost and maximizing downstream SCV; the decision variable is supplier selection. We assume there are several manufacturers and several candidate suppliers. For each manufacturer, our model proposes the best suppliers with the lowest cost and maximum visibility in the downstream supply chain. We examine the applicability of the model through numerical examples, and we also define several scenarios for the datasets and observe the tendencies. The results show that a minimum level of visibility in the downstream SC is needed to have a safe SC network.
Keywords: downstream supply chain, optimization, supply chain risk, supply chain visibility
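A minimal weighted-sum sketch of the bi-objective supplier-selection model using PuLP, with invented costs and visibility scores; the paper's actual formulation, constraints and data are richer than this.

```python
import pulp

# Scalarised bi-objective model: minimise cost, maximise downstream visibility.
manufacturers = ["M1", "M2"]
suppliers = ["S1", "S2", "S3"]
cost = {("M1", "S1"): 10, ("M1", "S2"): 8, ("M1", "S3"): 12,
        ("M2", "S1"): 9,  ("M2", "S2"): 11, ("M2", "S3"): 7}
vis = {"S1": 0.9, "S2": 0.4, "S3": 0.7}   # downstream visibility score in [0, 1]
w = 0.5                                    # trade-off weight between objectives

prob = pulp.LpProblem("supplier_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (manufacturers, suppliers), cat="Binary")

# each manufacturer picks exactly one supplier
for m in manufacturers:
    prob += pulp.lpSum(x[m][s] for s in suppliers) == 1

# weighted-sum objective: w * cost - (1 - w) * visibility (scaled)
prob += pulp.lpSum(
    w * cost[m, s] * x[m][s] - (1 - w) * 10 * vis[s] * x[m][s]
    for m in manufacturers for s in suppliers
)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for m in manufacturers:
    for s in suppliers:
        if x[m][s].value() == 1:
            print(m, "->", s)
```

Sweeping the weight w traces out the trade-off between the two objectives, one Pareto point per value.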
Procedia PDF Downloads 244
2393 Experimental Design for Formulation Optimization of Nanoparticle of Cilnidipine
Authors: Arti Bagada, Kantilal Vadalia, Mihir Raval
Abstract:
Cilnidipine is practically insoluble in water, which results in insufficient oral bioavailability. The purpose of the present investigation was to formulate cilnidipine nanoparticles by the nanoprecipitation method to increase the aqueous solubility and dissolution rate, and hence the bioavailability, utilizing statistical experimental design modules. Experimental designs were used to investigate the specific effects of the independent variables during preparation of the cilnidipine nanoparticles and the corresponding responses in optimizing the formulation. A Plackett-Burman design for the independent variables was successfully employed for optimization of the cilnidipine nanoparticles. The independent variables studied were drug concentration, solvent-to-antisolvent ratio, polymer concentration, stabilizer concentration and stirring speed. The dependent variables were the average particle size, polydispersity index, zeta potential value and saturation solubility of the formulated cilnidipine nanoparticles. The experiments were carried out according to 13 runs involving 5 independent variables (at higher and lower levels) employing the Plackett-Burman design. The cilnidipine nanoparticles were characterized by average particle size, polydispersity index, zeta potential value and saturation solubility, and the results were 149 nm, 0.314, 43.24 and 0.0379 mg/ml, respectively. The experimental results correlated well with the predicted data analysed by the Plackett-Burman statistical method.
Keywords: dissolution enhancement, nanoparticles, Plackett-Burman design, nanoprecipitation
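For illustration, the 12-run Plackett-Burman matrix can be built from its standard cyclic generator as below; five of the eleven columns would carry the study's factors and the rest serve as dummy columns for error estimation. The mapping of factors to columns, and reading the abstract's 13 runs as 12 design runs plus a center point, are our assumptions.

```python
import numpy as np

# 12-run Plackett-Burman design from the standard cyclic generator:
# each row is a cyclic shift of the generator, plus a final all-minus run.
# It screens up to 11 two-level factors in 12 runs.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])  # standard generator
rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])
print(design.shape)   # (12, 11)
print(design[:3])     # first three runs, coded -1 / +1 factor levels
```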
Procedia PDF Downloads 159
2392 Numerical Simulation on Two Components Particles Flow in Fluidized Bed
Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi
Abstract:
The flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze by experiments. Some bed materials with poor fluidization performance are always fluidized together with a fluidization medium. The material and the fluidization medium differ in many properties, such as density, size and shape. These factors make the dynamic process more complex and experimental research more limited. Numerical simulation is an efficient way to describe the gas-solid flow in a fluidized bed. One of the most popular numerical simulation methods is CFD-DEM, i.e., computational fluid dynamics coupled with the discrete element method. Particle shapes are simplified as spheres in most studies. Although sphere-shaped particles make the calculation uncomplicated, the effects of different shapes are disregarded. In practical applications, however, two-component systems in fluidized beds contain both sphere-shaped and non-sphere-shaped particles. Therefore, the two-component flow of sphere-shaped and non-sphere-shaped particles needs to be studied. In this paper, the flow of a mixture of moulded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach, improved to suit the non-sphere-shaped particles. The construction of the cylinder-shaped particles differed between the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, meaning that the small fictitious particles are gathered but not merged with each other. The diameter of a fictitious particle d_fic and its solid volume fraction inside a cylinder-shaped particle α_fic (called the fictitious volume fraction) are introduced to modify the drag coefficient β via the volume fractions of the cylinder-shaped particles α_cld and the sphere-shaped particles α_sph. In a computational cell, the voidage ε can be expressed as ε = 1 − α_cld·α_fic − α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements are merged with each other. A soft-sphere model was used to obtain the contact forces between particles; the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere elements. The model (size = 1 × 0.15 × 0.032 m³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The cell lengths of the CFD and DEM meshes are 1 mm and 2 mm, respectively. The superficial gas velocity was varied across models: 1.0 m/s, 1.5 m/s and 2.0 m/s. The simulation results were compared with experimental results. The particles moved in a regular fountain-like pattern, and the effect of superficial gas velocity on the cylinder-shaped particles was stronger than on the sphere-shaped particles. The results show that the present work provides an effective approach to simulating the flow of two-component particles.
Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow
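The voidage formula and the drag-law switch described above can be sketched directly; the regime threshold (ε < 0.8 for Ergun, Wen and Yu otherwise) follows the common Gidaspow convention, and the constant drag coefficient c_d is a simplification, since in practice it depends on the particle Reynolds number. Property values are illustrative.

```python
def voidage(alpha_cld, alpha_fic, alpha_sph):
    """eps = 1 - alpha_cld * alpha_fic - alpha_sph"""
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

def beta_drag(eps, u_slip, d_p, rho_g=1.2, mu_g=1.8e-5, c_d=0.44):
    """Interphase momentum-exchange coefficient beta (Gidaspow switch)."""
    if eps < 0.8:   # Ergun equation for dense regions
        return (150.0 * (1 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1 - eps) * rho_g * abs(u_slip) / d_p)
    # Wen & Yu correlation for dilute regions
    return (0.75 * c_d * eps * (1 - eps) * rho_g * abs(u_slip) / d_p
            * eps ** -2.65)

eps = voidage(alpha_cld=0.05, alpha_fic=0.6, alpha_sph=0.35)
print(f"voidage: {eps:.3f}, beta: {beta_drag(eps, u_slip=1.0, d_p=0.8e-3):.1f}")
```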
Procedia PDF Downloads 326
2391 Design of a Cooperative Neural Network, Particle Swarm Optimization (PSO) and Fuzzy Based Tracking Control for a Tilt Rotor Unmanned Aerial Vehicle
Authors: Mostafa Mjahed
Abstract:
Tilt-rotor UAVs (Unmanned Aerial Vehicles) are naturally unstable and difficult to maneuver. The purpose of this paper is to design controllers for the stabilization and trajectory tracking of this type of UAV. To this end, artificial intelligence methods have been exploited. First, the dynamics of the UAV were modeled using the Lagrange-Euler method. The conventional method based on Proportional, Integral and Derivative (PID) control was applied by decoupling the different flight modes. To improve the stability and trajectory tracking of the tilt-rotor, the fuzzy approach and the technique of multilayer neural networks (NN) were used. Thus, Fuzzy Proportional Integral and Derivative (FPID) and Neural Network-based Proportional Integral and Derivative (NNPID) controllers have been developed. The meta-heuristic approach based on the Particle Swarm Optimization (PSO) method allowed the tuning parameters of the NNPID controller to be adjusted, giving an improved NNPID-PSO controller. Simulation results in the Matlab environment show the efficiency of the adopted approaches. Moreover, the tilt-rotor UAV becomes stable and follows different types of trajectories with acceptable precision. The fuzzy, NN and NN-PSO-based approaches demonstrated their robustness, as the presence of disturbances did not alter the stability or the trajectory tracking of the tilt-rotor UAV.
Keywords: neural network, fuzzy logic, PSO, PID, trajectory tracking, tilt-rotor UAV
Procedia PDF Downloads 120
2390 Patient Scheduling Improvement in a Cancer Treatment Clinic Using Optimization Techniques
Authors: Maryam Haghi, Ivan Contreras, Nadia Bhuiyan
Abstract:
Chemotherapy is one of the most popular and effective cancer treatments offered to patients in outpatient oncology centers. In such clinics, patients first consult with an oncologist, who may prescribe a chemotherapy treatment plan based on blood test results and an examination of the patient's health status. Once the plan is determined, a set of chemotherapy and consultation appointments must be scheduled for the patient. In this work, a comprehensive mathematical formulation is proposed for planning and scheduling different types of chemotherapy patients over a planning horizon, considering the blood test, consultation, pharmacy and treatment stages. To be realistic and provide an applicable model, this study focuses on a case study of a major outpatient cancer treatment clinic in Montreal, Canada. Comparing the results of the proposed model with the current practice of the clinic under study shows significant improvements in various performance measures. These major improvements in the patients' schedules reveal that using optimization techniques for planning and scheduling patients in such highly demanded cancer treatment clinics is an essential step to provide good coordination between the stages involved, which ultimately increases the efficiency of the entire system and promotes staff and patient satisfaction.
Keywords: chemotherapy patients scheduling, integer programming, integrated scheduling, staff balancing
Procedia PDF Downloads 175
2388 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore
Authors: Ronal Muresano, Andrea Pagano
Abstract:
Nowadays, mathematical/statistical applications are developed with ever more complexity and accuracy. This precision and complexity mean that applications need more computational power to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow more parallelism to be included inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, based on analyzing the application's weaknesses in order to exploit the advantages of the multicore environment. All these improvements are made in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which a considerable improvement in execution time has been achieved: for the best case tested, the time was reduced by around 96% between the original serial version and the automatic parallel version.
Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool
Procedia PDF Downloads 369
2387 Optimal Geothermal Borehole Design Guided By Dynamic Modeling
Authors: Hongshan Guo
Abstract:
Ground-source heat pumps (GSHPs) provide stable and reliable heating and cooling when designed properly. The confounding effect of borehole depth on a GSHP system, however, is rarely taken into account in any optimization: the borehole depth is usually determined before the selection of the corresponding system components and any subsequent optimization of the GSHP system. The depth of the borehole is important to any GSHP system because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature, which can lead to fluctuations in the coefficient of performance (COP) of the GSHP system in the long term when the heating/cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller is developed that reads different building load profiles, optimizes for the smallest cost and temperature fluctuation at the borehole wall, and provides the borehole depth as output. Given the nonlinear dynamic nature of the GSHP system, the model predictive control (MPC) formulation was found to be more feasible than a conventional optimal control formulation, since both the trajectory history during the iterations and the final output can be computed and compared against. Across several scenarios with different weighting factors, the resulting system costs were verified against the literature and reports and found to be relatively accurate, while the temperature fluctuation at the borehole wall was also found to be within an acceptable range. It was therefore determined that MPC is adequate to optimize for the investment as well as the system performance for various outputs.
Keywords: geothermal borehole, MPC, dynamic modeling, simulation
Procedia PDF Downloads 287
2386 Multi Objective Simultaneous Assembly Line Balancing and Buffer Sizing
Authors: Saif Ullah, Guan Zailin, Xu Xianhao, He Zongdong, Wang Baoxi
Abstract:
The assembly line balancing problem aims to divide tasks among the stations of an assembly line and optimize certain objectives. In assembly lines, the workload differs between stations due to different task times, and the differences in workload between stations can cause blockage or starvation at some stations. Buffers are used to store semi-finished parts between stations and can help to smooth assembly production. Both assembly line balancing and buffer sizing can affect the throughput of an assembly line. The two problems have been studied separately in the literature, but because of their joint contribution to the throughput rate of assembly lines, it is desirable to study them simultaneously; they are therefore considered concurrently in the current research. This research aims to simultaneously maximize throughput, minimize the total size of the buffers in the assembly line, and minimize workload variations in the assembly line. A multi-objective optimization formulation is designed which can yield better Pareto solutions from the Pareto front, and a simple example problem is solved for assembly line balancing and buffer sizing simultaneously. This work is significant for assembly line balancing research, and it can be significant to introduce optimization approaches that optimize the present multi-objective problem in the future.
Keywords: assembly line balancing, buffer sizing, Pareto solutions
Procedia PDF Downloads 491
2385 Approximate Spring Balancing for Swimming Pool Lift Mechanism to Reduce Actuator Torque
Authors: Apurva Patil, Sujatha Srinivasan
Abstract:
Reducing actuator loads is important for applications in which human effort is required for actuation. The potential benefit of applying spring balancing to rehabilitation devices that work against gravity on a non-horizontal plane is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method for static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs and no auxiliary links. An application of this method to a manually operated swimming pool lift mechanism, which lowers and raises physically challenged users into or out of the swimming pool, is presented here. Various possible configurations using extension and compression springs, as well as a gas spring, in the mechanism are compared. This work involves approximate spring balancing of the mechanism by minimizing the variance of the potential energy: it flattens the potential energy distribution over the workspace and fuses this with numerical optimization. The results show a considerable reduction in the actuator torque requirement with a practical spring design and arrangement. Although the method provides only approximate balancing, it is versatile, flexible in choosing appropriate control variables relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring and the optimal attachment points, subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached inside the mechanism, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resulting swimming pool lift mechanism is compact. The cost benefits and reduced complexity can be significant advantages in the development of this user-actuated swimming pool lift for developing countries.
Keywords: gas spring, rehabilitation device, spring balancing, swimming pool lift
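A small sketch of "flattening the potential energy distribution": for a single lift arm with a non-zero-free-length spring, minimise the variance of the total (gravity plus spring) potential energy over the workspace with respect to the spring constant, free length and attachment radius. The geometry and values are illustrative assumptions, not the actual lift design.

```python
import numpy as np
from scipy.optimize import minimize

# A link of mass m pivots in a vertical plane; a non-zero-free-length spring
# connects a ground point (0, h) to a point at distance r along the link.
m, g, lc = 20.0, 9.81, 0.4            # link mass, gravity, centre-of-mass radius
h = 0.5                                # ground attachment height above the pivot
angles = np.linspace(0, np.pi / 2, 90) # workspace of the lift arm

def total_pe(params):
    k, L0, r = params
    grav = m * g * lc * np.sin(angles)               # gravity PE
    ax, ay = r * np.cos(angles), r * np.sin(angles)  # spring end on the link
    length = np.hypot(ax, ay - h)                    # spring length
    return grav + 0.5 * k * (length - L0) ** 2       # total PE over workspace

def pe_variance(params):
    return np.var(total_pe(params))

res = minimize(pe_variance, x0=[500.0, 0.3, 0.3],
               bounds=[(10, 5000), (0.05, 0.6), (0.05, 0.6)])
k, L0, r = res.x
print(f"k={k:.0f} N/m, L0={L0:.2f} m, r={r:.2f} m, PE std={np.sqrt(res.fun):.2f} J")
```

A flat total potential energy means the actuator no longer works against gravity, which is exactly the torque reduction sought.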
Procedia PDF Downloads 241
2384 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to decarbonising their electricity, along with promoting the large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty of power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially when applied to realistically sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem in reasonable computational time with reasonable resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems such as the TNEP. In particular, the use of AI together with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for the TNEP problem is that the subproblems are of a mixed-integer nature, so solving them requires significant time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that the binary classifier is integrated into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% in estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
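The learning step can be sketched as below, with simulated stand-ins for solved subproblem instances: a classifier maps features of the LP-relaxed solution to the value of each binary variable. The features, data-generating rule and classifier choice are our assumptions; the paper reports above 97% accuracy on real TNEP subproblems.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated training data: one row per binary investment variable of a solved
# relaxed subproblem; the label is the value the variable takes in the MIP.
rng = np.random.default_rng(7)
n = 5000
relaxed_value = rng.uniform(0, 1, n)   # fractional value from the LP relaxation
congestion = rng.uniform(0, 1, n)      # e.g. flow/capacity on the corridor
cost_rank = rng.uniform(0, 1, n)       # relative investment cost
X = np.column_stack([relaxed_value, congestion, cost_rank])
# synthetic ground truth: lines tend to be built when the relaxation pushes
# their value up and the corridor is congested and cheap
logit = 6 * relaxed_value + 2 * congestion - 2 * cost_rank - 3
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print(f"held-out accuracy: {clf.score(X[4000:], y[4000:]):.3f}")
# In the full algorithm the prediction replaces the MIP solve of a subproblem,
# with an optimality safeguard when the classifier is uncertain.
```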
Procedia PDF Downloads 85
2384 Developing a Comprehensive Green Building Rating System Tailored for Nigeria: Analyzing International Sustainable Rating Systems to Create Environmentally Responsible Standards for the Nigerian Construction Industry and Built Environment
Authors: Azeez Balogun
Abstract:
Green building rating practices are continually evolving and vary across regions. Yet a few core ideas remain constant, such as site selection, design, energy efficiency, water and material conservation, indoor environmental quality, operational optimization, and waste reduction. The essence of green building lies in the optimization of one or more of these criteria. This paper conducts a comparative analysis of seven widely recognized sustainable rating systems (BREEAM, CASBEE, Green Globes, Green Star, HK-BEAM, IGBC Green Homes, and LEED) based on the perceptions and opinions of stakeholders in Nigeria certified in green building rating systems. The aim is to identify and adopt an appropriate green building rating system for Nigeria. Numerous aspects of these systems were examined to determine the best fit for the Nigerian built environment. The findings indicate that LEED, the principal system in the USA and Canada, is the most suitable for Nigeria due to its strong foundation, extensive investment, and proven benefits. LEED obtained the highest score of 80 out of 100 points in this assessment.
Keywords: architecture, built environment, green building rating system, Nigeria Green Building Council, sustainability
Procedia PDF Downloads 28
2383 Application of Bacteriophage and Essential Oil to Enhance Photocatalytic Efficiency
Authors: Myriam Ben Said, Dhekra Trabelsi, Faouzi Achouri, Marwa Ben Saad, Latifa Bousselmi, Ahmed Ghrabi
Abstract:
The present study suggests the use of biological and natural bactericides (cheap, safe-to-handle, environmentally benign agents) to enhance the conventional wastewater treatment process. In this sense, to enhance the photocatalytic treatability of wastewater, we used virulent bacteriophage(s) and essential oils (EOs). Pre-treating wastewater with lytic phage(s) leads to a decrease in bacterial density and consequently limits the establishment of intercellular communication (quorum sensing, QS), thus preventing biofilm formation and inhibiting the expression of other virulence factors after photocatalysis. Moreover, to increase the photocatalytic efficiency, we added thyme (T. vulgaris) EO at 1/1000 (w/v) to the secondary treated wastewater. This EO showed anti-biofilm activity in vitro through the inhibition of planktonic cell motility and of cell attachment to an inert surface, as well as the deterioration of the sessile structure. The presence of photoactivatable molecules (photosensitizers) in this type of oil allows the photocatalytic efficiency to be optimized without the hazards related to dyes and chemical reagents. The use of 'biological and natural tools' in combination with the usual water treatment process can be considered a safe procedure to reduce and/or prevent the recontamination of treated water and also to prevent the re-expression of virulence factors, such as biofilm formation, by pathogenic bacteria through environmentally friendly processes.
Keywords: biofilm, essential oil, optimization, phage, photocatalysis, wastewater
Procedia PDF Downloads 154
2382 Application of Statistical Linearized Models for Investigations of Digital Dynamic Pulse-Frequency Control Systems
Authors: B. H. Aitchanov, Sh. K. Aitchanova, O. A. Baimuratov
Abstract:
This paper focuses on dynamic pulse-frequency modulation (DPFM) control systems. Currently, control laws based on DPFM control signals are widely used in direct digital control subsystems introduced in automated control systems for technological processes. Statistical analysis of automatic control systems reduces to the construction of functional relationships between the statistical characteristics of the error processes and the input processes. Structural and dynamic Volterra models of digital pulse-frequency control systems can be used to develop methods for generating these relationships, which differ in accuracy, in the amount of information required about the statistical characteristics of the input processes, and in the computational effort of their use.
Keywords: digital dynamic pulse-frequency control systems, dynamic pulse-frequency modulation, control object, discrete filter, impulse device, microcontroller
Procedia PDF Downloads 495