Search results for: Risk acceptance and Multi-objective optimization.
1869 A Review on the Usage of Ceramic Wastes in Concrete Production
Authors: O. Zimbili, W. Salim, M. Ndambuki
Abstract:
Construction and Demolition (C&D) wastes contribute the highest percentage of wastes worldwide (75%). Furthermore, ceramic materials contribute the highest percentage of wastes within the C&D wastes (54%). The current option for disposal of ceramic wastes is landfill, due to the unavailability of standards, avoidance of risk, and lack of knowledge and experience in using ceramic wastes in construction. The ability of ceramic wastes to act as a pozzolanic material in the production of cement has been effectively explored. The results proved that the temperatures used in the manufacturing of these tiles (about 900 °C) are sufficient to activate the pozzolanic properties of clay. They also showed that, after optimization (11-14% substitution), the cement blend performs better, with no morphological difference between the cement blended with ceramic waste and that blended with other pozzolanic materials. Sanitary ware and electrical insulator porcelain wastes are among the wastes investigated for use as aggregates in concrete production. When optimized, both produced good results, better than when natural aggregates are used. However, the research on ceramic wastes as a partial substitute for fine aggregates or cement has not been explored as extensively as the other areas. This review therefore concludes with a focus on investigating whether ceramic wall tile wastes used as a partial substitute for cement and fine aggregates could prove beneficial, since these two materials are the most expensive components in concrete production.
Keywords: Blended, morphological, pozzolanic properties, waste.
1868 A Multi-Objective Evolutionary Algorithm of Neural Network for Medical Diseases Problems
Authors: Sultan Noman Qasem
Abstract:
This paper presents an evolutionary algorithm for solving multi-objective optimization problems based on an artificial neural network (ANN). The multi-objective evolutionary algorithm used in this study is a genetic algorithm, while the ANN used is a radial basis function network (RBFN). The proposed algorithm is named memetic elitist Pareto non-dominated sorting genetic algorithm-based RBFN (MEPGAN). The proposed algorithm is applied to medical disease problems. The experimental results indicate that the proposed algorithm is viable and provides an effective means to design multi-objective RBFNs with good generalization capability and compact network structure. This study shows that MEPGAN generates RBFNs with an appropriate balance between accuracy and simplicity, compared to the other algorithms found in the literature.
Keywords: Radial basis function network, Hybrid learning, Multi-objective optimization, Genetic algorithm.
1867 Optimization of Diverter Box Configuration in a V94.2 Gas Turbine Exhaust System using Numerical Simulation
Authors: A. Mohajer, A. Noroozi, S. Norouzi
Abstract:
The bypass exhaust system of a 160 MW combined cycle unit has been modeled and analyzed using numerical simulation from a 2D perspective. The analysis was carried out using the commercial numerical simulation software FLUENT 6.2. All inputs were based on technical data gathered from the working conditions of a Siemens V94.2 gas turbine installed in the Yazd power plant. This paper deals with the reduction of pressure drop in the bypass exhaust system using turning vanes mounted in the diverter box in order to reduce the turbulent energy dissipation rate above the diverter box. The geometry of the turning vanes has been optimized based on the flow pattern at the diverter box inlet. The results show that the use of optimized turning vanes in the diverter box can improve the flow pattern and eliminate vortices around sharp edges just before the silencer. Furthermore, this optimization can decrease the pressure drop in the bypass exhaust system and lead to higher plant efficiency.
Keywords: Numerical simulation, Diverter box, Turning vanes, Exhaust system
1866 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches
Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani
Abstract:
This paper deals with an adaptive multiuser detector for direct-sequence code division multiple-access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered under a time-varying channel environment. Detector updating is performed with two criteria: mean square error (MSE) estimation and the minimum output energy (MOE) optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile of the multipath replicas of the user of interest. The performance of both schemes is studied under practical SNR conditions. Results show that at low SNR the MSE precombining LMMSE detector is better than the blind precombining LMMSE detector, whereas at higher SNR the MOE scheme performs better.
1865 Testing Database of Information System using Conceptual Modeling
Authors: Bogdan Walek, Cyril Klimes
Abstract:
This paper focuses on testing the database of an existing information system. At the beginning, we describe the basic problems of implemented databases, such as data redundancy, poor design of the database logical structure, or inappropriate data types in the columns of database tables. These problems are often the result of an incorrect understanding of the primary requirements for the database of an information system. We then propose an algorithm to compare the conceptual model created from vague requirements for a database with a conceptual model reconstructed from the implemented database. The algorithm also suggests steps leading to optimization of the implemented database. The proposed algorithm is verified by an implemented prototype. The paper also describes a fuzzy system which works with the vague requirements for a database of an information system, a procedure for creating a conceptual model from vague requirements, and an algorithm for reconstructing a conceptual model from an implemented database.
Keywords: testing, database, relational database, information system, conceptual model, fuzzy, uncertain information, database testing, reconstruction, requirements, optimization
1864 Optimum Shape and Design of Cooling Towers
Authors: A. M. El Ansary, A. A. El Damatty, A. O. Nassef
Abstract:
The aim of the current study is to develop a numerical tool that is capable of achieving an optimum shape and design of hyperbolic cooling towers based on coupling a non-linear finite element model developed in-house and a genetic algorithm optimization technique. The objective function is set to be the minimum weight of the tower. The geometric modeling of the tower is represented by means of B-spline curves. The finite element method is applied to model the elastic buckling behaviour of a tower subjected to wind pressure and dead load. The study is divided into two main parts. The first part investigates the optimum shape of the tower corresponding to minimum weight assuming constant thickness. The study is extended in the second part by introducing the shell thickness as one of the design variables in order to achieve an optimum shape and design. Design, functionality and practicality constraints are applied.
Keywords: B-splines, Cooling towers, Finite element, Genetic algorithm, Optimization
1863 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and single output can be modeled with a shallow neural network, and gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient momentum contribute to faster convergence and enhance the model's capability. Lastly, experimental results proved the effectiveness of the second-order gradient descent algorithm and indicated that optimization with the normal equation was the most suitable for linear dynamic models.
Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.
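The two estimation routes mentioned in this abstract can be illustrated with a minimal sketch. The snippet below (Python) fits a hypothetical first-order linear SISO model (made-up coefficients and noise level, not the paper's data) once with the normal equation and once with mini-batch gradient descent with momentum on a squared-error objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order linear SISO model: y[k] = a*y[k-1] + b*u[k-1] + noise
a_true, b_true = 0.8, 0.5
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

# Regression matrix Phi and one-step-ahead targets
Phi = np.column_stack([y[:-1], u[:-1]])   # columns: y[k-1], u[k-1]
t = y[1:]

# 1) Normal equation: theta = (Phi^T Phi)^{-1} Phi^T t
theta_ne = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)

# 2) Mini-batch gradient descent with momentum on the squared-error objective
theta = np.zeros(2)
velocity = np.zeros(2)
lr, beta, batch = 0.05, 0.9, 32
for epoch in range(200):
    idx = rng.permutation(len(t))
    for start in range(0, len(t), batch):
        j = idx[start:start + batch]
        err = Phi[j] @ theta - t[j]
        grad = Phi[j].T @ err / len(j)
        velocity = beta * velocity - lr * grad
        theta += velocity

print("normal equation :", theta_ne)
print("momentum GD     :", theta)
```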
1862 Comparison between Minimum Direct and Indirect Jerks of Linear Dynamic Systems
Authors: Tawiwat Veeraklaew, Nathasit Phathana-im, Songkit Heama
Abstract:
Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, many researchers focus solely on either the minimum energy consumption or the minimum jerk trajectory. This paper proposes a simple yet very interesting relationship between the minimum direct and indirect jerk approaches in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of direct and indirect jerks are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time history of the control inputs employed by the minimum direct and indirect jerk designs. By considering the minimum indirect jerk problem, the numerical solution becomes much easier and yields results similar to the minimum direct jerk problem.
Keywords: Optimization, Dynamic, Linear Systems, Jerks.
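For reference, the rest-to-rest minimum-jerk profile has a well-known closed form (a textbook special case, not the direct/indirect formulation studied here); the Python sketch below evaluates that profile and its jerk, with illustrative boundary values.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Closed-form rest-to-rest minimum-jerk profile: minimizes the integral of
    squared jerk subject to zero velocity and acceleration at both endpoints."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5            # normalized position
    x = x0 + (xf - x0) * s
    v = (xf - x0) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T
    jerk = (xf - x0) * (60 - 360 * tau + 360 * tau**2) / T**3
    return t, x, v, jerk

# Illustrative boundary values (not taken from the paper)
t, x, v, jerk = minimum_jerk(x0=0.0, xf=1.0, T=2.0)
print("peak velocity:", v.max(), "peak |jerk|:", np.abs(jerk).max())
```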
1861 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint sparsity is the first property of the abundances, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency, where the number of endmembers participating in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms the state-of-the-art algorithms with better spectral unmixing accuracy.
Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.
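Variable-splitting algorithms of this kind typically apply the proximal operators of the two regularizers in turn. The sketch below shows those two building blocks for the special case p = 1 of the l2,p-norm (row-wise soft-thresholding for joint sparsity, singular-value soft-thresholding for the nuclear norm); the matrix and thresholds are illustrative only, not the paper's algorithm.

```python
import numpy as np

def prox_l21(X, tau):
    """Row-wise soft-thresholding: proximal operator of tau * ||X||_{2,1}.
    Encourages joint sparsity (whole abundance rows shared by neighboring pixels)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def prox_nuclear(X, tau):
    """Singular-value soft-thresholding: proximal operator of tau * ||X||_*.
    Encourages a low-rank abundance matrix (rank-deficiency assumption)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Tiny demonstration on a random abundance-like matrix
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 8))
print("rank after nuclear prox:", np.linalg.matrix_rank(prox_nuclear(X, 2.0)))
print("nonzero rows after l2,1 prox:",
      int((np.linalg.norm(prox_l21(X, 2.5), axis=1) > 0).sum()))
```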
1860 Combining Minimum Energy and Minimum Direct Jerk of Linear Dynamic Systems
Authors: V. Tawiwat, P. Jumnong
Abstract:
Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as the automobile and the pick-and-place robot manipulator that handles fragile equipment. Nevertheless, many researchers focus solely on either the minimum energy consumption or the minimum jerk trajectory. This paper proposes a simple yet very interesting approach that combines the minimum energy and minimum indirect jerk criteria in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of the minimum energy, the minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time history of the state inputs employed by the combined minimum energy and jerk designs. The numerical solutions of the minimum direct jerk and energy problems are exactly the same; however, the solutions of the minimum energy problem alone yield similar results, especially in terms of tendency.
Keywords: Optimization, Dynamic, Linear Systems, Jerks.
1859 Optimization of Transmission Lines Loading in TNEP Using Decimal Codification Based GA
Authors: H. Shayeghi, M. Mahdavi
Abstract:
Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. Up to now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. However, in all of these methods the lines' adequacy rate has not been considered at the end of the planning horizon, i.e., the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning has been implemented by merging the lines loading parameter into the STNEP and inserting the investment cost into the fitness function constraints using a genetic algorithm. The expanded network will possess maximum adequacy to supply the load demand without the transmission lines becoming overloaded later. Finally, an adequacy index can be defined and used to compare designs that have different investment costs and adequacy rates. In this paper, the proposed idea has been tested on the Garver network. The results show that the expanded network possesses maximum efficiency economically.
Keywords: Adequacy Optimization, Transmission Expansion Planning, DCGA.
1858 Efficient Frontier - Comparing Different Volatility Estimators
Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković
Abstract:
Modern Portfolio Theory (MPT), according to Markowitz, states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator based on intraday data and compares three efficient frontiers on the Croatian stock market. The results show that the range-based volatility estimator outperforms both the mean-variance and lower semi-variance models.
Keywords: Variance, lower semi-variance, range-based volatility.
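One common range-based estimator built from intraday highs and lows is Parkinson's; the sketch below compares it with the classic close-to-close standard deviation on a few made-up daily prices (placeholders, not Croatian market data).

```python
import numpy as np

def close_to_close_vol(close):
    """Sample standard deviation of daily log returns (classic estimator)."""
    r = np.diff(np.log(close))
    return np.std(r, ddof=1)

def parkinson_vol(high, low):
    """Parkinson range-based estimator using daily high/low prices:
    sigma^2 = mean( ln(H/L)^2 ) / (4 ln 2)."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(hl**2) / (4.0 * np.log(2.0)))

# Hypothetical daily prices, purely illustrative
close = np.array([100.0, 101.2, 100.5, 102.3, 101.8, 103.0])
high  = np.array([100.8, 101.9, 101.4, 102.9, 102.6, 103.5])
low   = np.array([ 99.5, 100.3,  99.9, 101.1, 101.0, 102.1])
print("close-to-close:", close_to_close_vol(close))
print("Parkinson     :", parkinson_vol(high, low))
```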
1857 Comparison of Mamdani and Sugeno Fuzzy Inference Systems for the Breast Cancer Risk
Authors: Alshalaa A. Shleeg, Issmail M. Ellabib
Abstract:
Breast cancer is a major health burden worldwide, being a leading cause of death among women. In this paper, Fuzzy Inference Systems (FIS) are developed for the evaluation of breast cancer risk using Mamdani-type and Sugeno-type models. The paper outlines the basic difference between Mamdani-type and Sugeno-type FIS. The results demonstrate the performance comparison of the two systems and the advantages of using the Sugeno type over the Mamdani type.
Keywords: Breast cancer diagnosis, Fuzzy Inference System (FIS), Fuzzy Logic, fuzzy intelligent technique.
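The key difference lies in the rule consequents: Mamdani rules output fuzzy sets that must be defuzzified, whereas Sugeno rules output constants or linear functions combined by a weighted average. A minimal zero-order Sugeno sketch with two made-up rules on a single illustrative input is shown below; the membership functions and rule constants are assumptions, not the paper's system.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def sugeno_risk(x):
    """Zero-order Sugeno inference with two hypothetical rules on one input:
       R1: if x is LOW  then risk = 0.2
       R2: if x is HIGH then risk = 0.9
    Output is the firing-strength-weighted average of the rule constants."""
    w_low  = trimf(x, 0.0, 2.0, 5.0)
    w_high = trimf(x, 3.0, 8.0, 10.0)
    weights = np.array([w_low, w_high])
    consequents = np.array([0.2, 0.9])
    return float(weights @ consequents / (weights.sum() + 1e-12))

for x in (1.0, 4.0, 8.0):
    print(f"input={x:4.1f}  Sugeno risk={sugeno_risk(x):.3f}")
```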
1856 Artificial Neural Network Approach for Inventory Management Problem
Authors: Govind Shay Sharma, Randhir Singh Baghel
Abstract:
The stock management of raw materials and finished goods is a significant issue for industries in fulfilling customer demand. Optimization of inventory strategies is crucial to enhancing customer service, reducing lead times and costs, and meeting market demand. This paper proposes an approach to predict the optimum stock level by utilizing past stock data and forecasting the required quantities. We utilize an Artificial Neural Network (ANN) to determine the optimal value. The objective of this paper is to discuss an optimized ANN that can find the best solution for the inventory model. The k-means algorithm is employed to create homogeneous groups of items; these groups likely exhibit similar characteristics or attributes that make them suitable for being managed with uniform inventory control policies. The paper proposes a method that uses the neural fit algorithm to control the cost of inventory.
Keywords: Artificial Neural Network, inventory management, optimization, distributor center.
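The grouping step can be illustrated with a plain k-means pass over item features; the snippet below uses hypothetical features (monthly demand and unit cost) purely as placeholders, not the paper's data set.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: group items by feature similarity (e.g. demand rate,
    unit cost) so each cluster can share one inventory control policy."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each item to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Hypothetical item features: [average monthly demand, unit cost]
items = np.array([[120, 2.5], [110, 3.0], [15, 40.0], [18, 38.0], [60, 10.0], [55, 12.0]])
labels, centers = kmeans(items, k=3)
print("cluster labels:", labels)
print("cluster centres:\n", centers)
```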
1855 Study of Chest Pain and its Risk Factors in Over 30 Year-Old Individuals
Authors: S. Dabiran
Abstract:
Chest pain is one of the most prevalent complaints among adults that cause people to attend medical centers. The aim was to determine the prevalence and risk factors of chest pain among people over 30 years old in Tehran. In this cross-sectional study, 787 adults took part from April 2005 until April 2006. The sampling method was random cluster sampling with 25 clusters. In each cluster, interviews were performed with 32 people over 30 years old living in those houses. In cases with chest pain, extra questions were asked. The prevalence of CP was 9% (71 cases). Of these, 21 cases (6.5%) were in the 41-60 year age range and the remainder were over 61 years old. 19 cases (26.8%) reported CP at rest, and all of the cases had exertion-onset CP. The CP duration was 10 minutes or less in all of the cases, and in most of them (84.5%) the location of pain was the left anterior part of the chest, the left anterior part of the sternum, and/or the left arm. There was a positive history of myocardial infarction in 12 cases (17%). There was a significant relation between CP and age and sex, and between history of myocardial infarction and marital status of the study participants. Our results are similar to the results of other studies in most parts; however, it is necessary to perform supplementary tests and follow-up studies to differentiate exactly between cardiac and non-cardiac CP.
Keywords: Chest pain, myocardial infarction, risk factor, prevalence
1854 IPSO Based UPFC Robust Output Feedback Controllers for Damping of Low Frequency Oscillations
Authors: A. Safari, H. Shayeghi, H. A. Shayanfar
Abstract:
On the basis of the linearized Phillips-Heffron model of a single-machine power system, a novel method for designing a unified power flow controller (UPFC)-based output feedback controller is presented. The design problem of the output feedback controller for the UPFC is formulated as an optimization problem with a time domain-based objective function, which is solved by iteration particle swarm optimization (IPSO), a method with a strong ability to find the most promising results. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results demonstrate the effectiveness and robustness of the proposed method in achieving a high-performance power system. The simulation study shows that the controller designed by iteration PSO performs better than that designed by classical PSO in finding the solution.
Keywords: UPFC, IPSO, output feedback Controller.
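For context, the classical PSO update that the iteration variant builds on can be written in a few lines. The sketch below minimizes a simple stand-in objective (a sphere function) rather than the paper's time-domain damping criterion, and all parameter values are illustrative assumptions.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Classical particle swarm optimization (not the iteration-PSO variant
    used in the paper); minimizes f over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in objective for demonstration only
sphere = lambda z: float(np.sum(z**2))
best, best_val = pso(sphere, dim=3)
print("best point:", best, "value:", best_val)
```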
1853 Optimal Compensation of Reactive Power in the Restructured Distribution Network
Authors: Atefeh Pourshafie, Mohsen. Saniei, S. S. Mortazavi, A. Saeedian
Abstract:
In this paper, the optimal capacitor placement problem is formulated in a restructured distribution network. In this scenario, the distribution network operator can also consider reactive energy as a service that can be sold to the transmission system. Thus, a search for the optimal location, size, and number of capacitor banks has been performed with the objectives of loss reduction, maximum income from selling reactive energy to the transmission system, and return on investment for the capacitors. The results are influenced by the economic value of reactive energy, so the problem has been solved for various values of it. The implemented optimization technique is a genetic algorithm. For each economic value of reactive power, the threshold value for selling reactive power has been obtained at the point where the return-on-investment index increases and changes from zero or negative values to positive values. Increasing this economic parameter is reasonable as long as the network losses remain less than the losses before compensation.
Keywords: capacitor placement, deregulated electric market, distribution network optimization.
1852 Evaluation of Risks in New Product Innovation
Authors: Emre Alptekin, Damla Yalçınyiğit, Gülfem Alptekin
Abstract:
In highly competitive environments, a growing number of companies must regularly launch new products speedily and successfully. A company's success is based on a systematic, conscious product design method that meets the market requirements and takes risks as well as resources into consideration. Research has found that developing and launching new products are inherently risky endeavors. Hence, in this research we aim at introducing a risk evaluation framework for the new product innovation process. Our framework is based on the fuzzy analytical hierarchy process (FAHP) methodology. We have applied all the stages of the framework to the risk evaluation process of a pharmaceuticals company.
Keywords: Evaluation, risks, product innovation.
1851 Topological Sensitivity Analysis for Reconstruction of the Inverse Source Problem from Boundary Measurement
Authors: Maatoug Hassine, Mourad Hrizi
Abstract:
In this paper, we consider a geometric inverse source problem for the heat equation with Dirichlet and Neumann boundary data. We will reconstruct the exact form of the unknown source term from additional boundary conditions. Our motivation is to detect the location, the size and the shape of the source support. We present a one-shot algorithm based on the Kohn-Vogelius formulation and the topological gradient method. The geometric inverse source problem is formulated as a topology optimization one. A topological sensitivity analysis is derived from a source function. Then, we present a non-iterative numerical method for the geometric reconstruction of the source term with unknown support using a level curve of the topological gradient. Finally, we give several examples to show the viability of our presented method.
Keywords: Geometric inverse source problem, heat equation, topological sensitivity, topological optimization, Kohn-Vogelius formulation.
1850 Retrieval Augmented Generation against the Machine: Merging Human Cyber Security Expertise with Generative AI
Authors: Brennan Lodge
Abstract:
Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models do have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure that pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
Keywords: Retrieval Augmented Generation, Governance Risk and Compliance, Cybersecurity, AI-driven Compliance, Risk Management, Generative AI.
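The retrieval half of such a pipeline can be sketched compactly. The snippet below uses a toy bag-of-words embedding and cosine similarity over a few hypothetical compliance snippets (placeholders, not a real knowledge base, and a deliberate simplification of the dense seq2seq setup described above); the assembled prompt would then be handed to the generator.

```python
import re
from collections import Counter
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words vector; a production RAG pipeline would use a dense
    sentence-embedding model instead."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return np.array([counts[w] for w in vocab], dtype=float)

def retrieve(query, corpus, k=2):
    """Return the k corpus passages most similar to the query (cosine similarity)."""
    vocab = sorted({w for doc in corpus + [query] for w in re.findall(r"[a-z]+", doc.lower())})
    M = np.array([embed(doc, vocab) for doc in corpus])
    q = embed(query, vocab)
    sims = M @ q / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-12)
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

# Hypothetical compliance snippets (illustrative placeholders)
corpus = [
    "Access reviews must be completed quarterly for privileged accounts.",
    "Encryption keys are rotated every twelve months.",
    "Vendors must report security incidents within 72 hours.",
]
query = "How often are privileged account access reviews required?"
context = retrieve(query, corpus)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: " + query
print(prompt)  # this prompt would then be passed to the generator (LLM)
```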
1849 Magnitude and Determinants of Overweight and Obesity among High School Adolescents in Addis Ababa, Ethiopia
Authors: Mulugeta Shegaze, Mekitie Wondafrash, Alemayehu A. Alemayehu, Shikur Mohammed, Zewdu Shewangezaw, Mukerem Abdo, Gebresilasea Gendisha
Abstract:
Background: The 2004 World Health Assembly called for specific actions to halt the overweight and obesity epidemic that is currently penetrating urban populations in the developing world. Adolescents require particular attention due to their vulnerability to develop obesity and the fact that adolescent weight tracks strongly into adulthood. However, there is scarcity of information on the modifiable risk factors to be targeted for primary intervention among urban adolescents in Ethiopia. This study was aimed at determining the magnitude and risk factors of overweight and obesity among high school adolescents in Addis Ababa. Methods: An institution-based cross-sectional study was conducted in February and March 2014 on 456 randomly selected adolescents from 20 high schools in Addis Ababa city. Demographic data and other risk factors of overweight and obesity were collected using a self-administered structured questionnaire, whereas anthropometric measurements of weight and height were taken using calibrated equipment and standardized techniques. The WHO STEPS instrument for chronic disease risk was applied to assess dietary habit and physical activity. Overweight and obesity status was determined based on BMI-for-age percentiles of the WHO 2007 reference population. Results: The prevalence rates of overweight, obesity, and overall overweight/obesity among high school adolescents in Addis Ababa were 9.7% (95%CI = 6.9-12.4%), 4.2% (95%CI = 2.3-6.0%), and 13.9% (95%CI = 10.6-17.1%), respectively. Overweight/obesity prevalence was highest among female adolescents, in private schools, and in the higher wealth category. In the multivariable regression model, being female [AOR(95%CI) = 5.4(2.5,12.1)], being from a private school [AOR(95%CI) = 3.0(1.4,6.2)], having >3 regular meals [AOR(95%CI) = 4.0(1.3,13.0)], consumption of sweet foods [AOR(95%CI) = 5.0(2.4,10.3)] and spending >3 hours/day sitting [AOR(95%CI) = 3.5(1.7,7.2)] were found to increase overweight/obesity risk, whereas a high Total Physical Activity level [AOR(95%CI) = 0.21(0.08,0.57)] and better nutrition knowledge [AOR(95%CI) = 0.16(0.07,0.37)] were found protective. Conclusions: More than one in ten of the high school adolescents were affected by overweight/obesity, with dietary habit and physical activity being important modifiable risk factors. A well-tailored nutrition education program targeting lifestyle change should be initiated with more emphasis on female adolescents and students in private schools.
Keywords: Adolescents, NCDs, overweight, obesity.
1848 Optimal Distributed Generator Sizing and Placement by Analytical Method and PSO Algorithm Considering Optimal Reactive Power Dispatch
Authors: Kyaw Myo Lin, Pyone Lai Swe, Khine Zin Oo
Abstract:
In this paper, an approach combining an analytical method for distributed generator (DG) sizing with a meta-heuristic search for the optimal location of DG is presented. The optimal size of DG at each bus is estimated by the loss sensitivity factor method, while the optimal sites are determined by Particle Swarm Optimization (PSO)-based optimal reactive power dispatch for minimizing active power loss. To confirm the proposed approach, it has been tested on the IEEE 30-bus test system. The adjustments of operating constraints and voltage profile improvements have also been observed. The obtained results show that the allocation of DGs results in a significant loss reduction with good voltage profiles, and the combined approach is competent in keeping the system voltages within the acceptable limits.
Keywords: Analytical approach, distributed generations, optimal size, optimal location, optimal reactive power dispatch, particle swarm optimization algorithm.
1847 The Feedback Control for Distributed Systems
Authors: Kamil Aida-zade, C. Ardil
Abstract:
We study the problem of synthesis of lumped source controls for objects with distributed parameters on the basis of continuous observation of the phase state at given points of the object. In the proposed approach, the phase state space (phase space) is partitioned beforehand at the observable points into given subsets (zones). The synthesized control actions are taken from the class of piecewise constant functions. The current values of the control actions are determined by the subset of the phase space that contains the aggregate of the current states of the object at the observable points (in these states the control actions take constant values). In the paper, such synthesized control actions are called zone control actions. A technique to obtain optimal values of the zone control actions with the use of smooth optimization methods is given. With this aim, formulas for the gradient of the objective functional in the space of zone control actions are obtained.
Keywords: Feedback control, distributed systems, smooth optimization methods, lumped control synthesis.
1846 Identifying and Adopting Latter Instruments Determining the Sustainable Company Competitiveness
Authors: Andrej Miklošík, Petra Horváthová, Štefan Žák
Abstract:
Nowadays, companies in all sectors are looking for sources of competitive advantage. The holistic marketing approach searches for their emergence based on the integration of all components and elements across the organization. Modern marketing sees the sources of competitive advantage in implementing the latest managerial practices, motivation, intelligent project management, knowledge management, collaborative marketing, CSR and, in recent years, also in business process optimization. With the use of modern tools, including business process management and business process modelling, a company can markedly increase its internal efficiency, which can lead not only to lowering costs but also to creating the environment for optimal customer care, a positive corporate culture and the origination of innovations. In the article, the authors analyze the recent trend in this area and introduce suggestions for companies to identify and optimize the key processes that have a significant impact on the company's competitiveness.
Keywords: business process optimization, competitive advantage, corporate social responsibility, knowledge management
1845 Optimization of Transportation Cost of Plaster of Paris Cement
Authors: K. M. Oba
Abstract:
The transportation modelling technique was adopted for the problem of transporting Plaster of Paris (POP) cement from three supply locations (construction materials markets) to three demand locations (construction sites) in Port Harcourt. The study was carried out for 40 kg bags of POP cement fully loaded at 600 bags per truck from the three selected construction materials markets in Port Harcourt. The costs of transporting the POP cement were determined and subjected to the North-West Corner, Least Cost, and Vogel's Approximation methods to determine the initial feasible solution. Of the three results, the Least Cost Method turned out to have the lowest cost. Using the Stepping Stone Method, the optimum shipping cost was finally attained after two successive iterations. The optimum shipping cost was calculated to be $1,690 or ₦1,774,500 as of October 2023. As a result of this study, the application of transportation modelling can boost the effective management of the transportation of POP cement in construction projects.
Keywords: Cost of POP cement, management of transportation, optimization of shipping cost, Plaster of Paris, transportation model.
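A compact illustration of the Least Cost Method for the initial basic feasible solution is given below; the cost matrix, supplies and demands are illustrative placeholders, not the study's actual market data, and the subsequent Stepping Stone improvement step is not shown.

```python
import numpy as np

def least_cost_method(cost, supply, demand):
    """Least Cost Method: greedy initial basic feasible solution for a
    balanced transportation problem (total supply == total demand)."""
    cost = np.asarray(cost, dtype=float)
    supply = np.asarray(supply, dtype=float).copy()
    demand = np.asarray(demand, dtype=float).copy()
    alloc = np.zeros_like(cost)
    active = np.ones_like(cost, dtype=bool)
    while supply.sum() > 1e-9 and demand.sum() > 1e-9:
        masked = np.where(active, cost, np.inf)
        i, j = np.unravel_index(np.argmin(masked), cost.shape)
        qty = min(supply[i], demand[j])         # ship as much as possible on the cheapest route
        alloc[i, j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] <= 1e-9:
            active[i, :] = False                # supply point exhausted
        if demand[j] <= 1e-9:
            active[:, j] = False                # demand point satisfied
    return alloc, float((alloc * cost).sum())

# Hypothetical unit costs from 3 markets to 3 sites, with truckload supplies/demands
cost = [[4, 6, 8],
        [5, 3, 7],
        [6, 5, 4]]
supply = [10, 8, 12]
demand = [9, 11, 10]
alloc, total = least_cost_method(cost, supply, demand)
print(alloc)
print("total transport cost:", total)
```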
1844 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data
Authors: N. Borjalilu, P. Rabiei, A. Enjoo
Abstract:
A Flight Data Monitoring (FDM) program assists an operator in the aviation industry in identifying, quantifying, assessing and addressing operational safety risks, in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator, integrated into the operator's Safety Management System (SMS), allowing it to detect, confirm and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data from different event-focused perspectives based on fuzzy set values. It permits evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is the proposed qualitative safety analysis of flight data. This research applies the opinions of aviation experts through a number of questionnaires related to flight data in four categories of occurrences that can take place during an accident or an incident: Runway Excursions (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each one (by F-TOPSIS) and applying it to the number of risks of the event, the safety risk of each related event can be obtained.
Keywords: F-TOPSIS, fuzzy set, FDM, flight safety.
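The ranking core of TOPSIS can be sketched in its crisp form (the fuzzy variant replaces the crisp scores with fuzzy numbers before the same distance computation); the scores, weights and criteria below are illustrative assumptions, not the questionnaire data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by relative closeness to the ideal solution."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = X / np.linalg.norm(X, axis=0)        # vector normalization
    V = R * w                                # weighted normalized matrix
    # ideal best/worst per criterion (benefit criteria maximized, cost criteria minimized)
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)      # closeness coefficient in [0, 1]

# Hypothetical scores of the four event categories on three illustrative criteria
scores = [[7, 5, 4],    # RE:    severity, likelihood, detectability
          [9, 2, 6],    # CFIT
          [8, 1, 7],    # MAC
          [9, 3, 5]]    # LOC-I
weights = [0.5, 0.3, 0.2]
benefit = np.array([True, True, False])      # higher detectability lowers risk
closeness = topsis(scores, weights, benefit)
for name, c in zip(["RE", "CFIT", "MAC", "LOC-I"], closeness):
    print(f"{name:6s} closeness coefficient: {c:.3f}")
```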
1843 PID Control Design Based on Genetic Algorithm with Integrator Anti-Windup for Automatic Voltage Regulator and Speed Governor of Brushless Synchronous Generator
Authors: O. S. Ebrahim, M. A. Badr, Kh. H. Gharib, H. K. Temraz
Abstract:
This paper presents a methodology based on genetic algorithm (GA) to tune the parameters of proportional-integral-differential (PID) controllers utilized in the automatic voltage regulator (AVR) and speed governor of a brushless synchronous generator driven by three-stage steam turbine. The parameter tuning is represented as a nonlinear optimization problem solved by GA to minimize the integral of absolute error (IAE). The problem of integral windup due to physical system limitations is solved using simple anti-windup scheme. The obtained controllers are compared to those designed using classical Ziegler-Nichols technique and constrained optimization. Results show distinct superiority of the proposed method.
Keywords: Brushless synchronous generator, Genetic Algorithm, GA, Proportional-Integral-Differential control, PID control, automatic voltage regulator, AVR.
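The cost evaluation inside such a GA loop can be sketched as follows: a discrete PID with simple clamping anti-windup driving a first-order plant (a generic stand-in for the AVR/governor dynamics, with made-up limits and gains), returning the IAE that the GA would minimize.

```python
def simulate_pid(kp, ki, kd, t_end=10.0, dt=0.01, u_limit=(0.0, 2.0)):
    """Discrete PID with clamping anti-windup on a first-order plant
    2*dy/dt + y = u. Returns the integral of absolute error (IAE)
    for a unit step reference; all values are illustrative."""
    n = int(t_end / dt)
    y, integ, prev_err, iae = 0.0, 0.0, 0.0, 0.0
    for _ in range(n):
        err = 1.0 - y                              # unit step setpoint
        deriv = (err - prev_err) / dt
        u_unsat = kp * err + ki * integ + kd * deriv
        u = min(max(u_unsat, u_limit[0]), u_limit[1])
        # anti-windup: only accumulate the integrator while the actuator is unsaturated
        if u == u_unsat:
            integ += err * dt
        y += dt * (u - y) / 2.0                    # Euler step of the plant
        iae += abs(err) * dt
        prev_err = err
    return iae

# A GA would search (kp, ki, kd) to minimize this IAE; here one guess is evaluated.
print("IAE for (kp, ki, kd) = (2.0, 1.0, 0.1):", simulate_pid(2.0, 1.0, 0.1))
```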
1842 Optimization and Kinetic Study of Gaharu Oil Extraction
Authors: Muhammad Hazwan H., Azlina M.F., Hasfalina C.M., Zurina Z.A., Hishamuddin J
Abstract:
Gaharu, produced by Aquilaria spp., is classified as one of the most valuable forest products traded internationally, as it is a very resinous, fragrant and highly valuable heartwood. Gaharu has been widely used in aromatherapy, medicine, perfume and religious practices. This work aimed to determine the factors affecting the solid-liquid extraction of gaharu oil using hexane as solvent under experimental conditions. The kinetics of extraction was assumed and verified based on a second-order mechanism. The effects of three main factors, namely temperature, reaction time and solvent-to-solid ratio, were investigated to achieve maximum oil yield. The optimum conditions were found to be a temperature of 65 °C, a 9-hour reaction time and a solvent-to-solid ratio of 12:1, giving a 14.5% oil yield. The experimental kinetic data agree well with the second-order extraction model. The initial extraction rate (h) was 0.0115 g mL⁻¹ min⁻¹; the extraction capacity (Cs) was 1.282 g mL⁻¹; the second-order extraction constant (k) was 0.007 mL g⁻¹ min⁻¹, and the coefficient of determination, R², was 0.945.
Keywords: Gaharu, solid liquid extraction, optimization, kinetics.
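Assuming the usual second-order solid-liquid extraction form dC/dt = k(Cs − C)², the reported constants are mutually consistent, since the initial rate is h = kCs²; the short check below reproduces h ≈ 0.0115 and evaluates the implied concentration curve at a few times.

```python
# Second-order solid-liquid extraction model implied by the reported parameters:
#   dC/dt = k (Cs - C)^2   =>   C(t) = Cs^2 * k * t / (1 + Cs * k * t),   h = k * Cs^2
k = 0.007      # mL g^-1 min^-1
Cs = 1.282     # g mL^-1
h = k * Cs ** 2
print(f"initial extraction rate h = {h:.4f} g mL^-1 min^-1")   # ~0.0115, matching the abstract

def C(t_min):
    """Oil concentration in solution at time t (minutes) under the second-order model."""
    return Cs ** 2 * k * t_min / (1.0 + Cs * k * t_min)

for t in (60, 180, 540):   # 1 h, 3 h, 9 h
    print(f"t = {t:3d} min  ->  C = {C(t):.3f} g mL^-1")
```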
1841 Impact of Liquidity Crunch on Interbank Network
Authors: I. Lucas, N. Schomberg, F-A. Couturier
Abstract:
Most empirical studies have analyzed how the liquidity risks faced by individual institutions turn into systemic risk. The recent banking crisis has highlighted the importance of grasping and controlling systemic risk, and the acceptance by central banks of easing their monetary policies to save defaulting or illiquid banks. This last point shows that banks may pay less attention to liquidity risk, which, in turn, can become a new important channel of loss. Financial regulation focuses on the most important and "systemic" banks in the global network. However, to quantify the expected loss associated with liquidity risk, it is worthwhile to analyze the sensitivity to this channel for the various elements of the global bank network. A small bank is not considered potentially systemic; however, the interaction of many small banks together can become a systemic element. This paper analyzes the impact of the interaction of medium and small banks on a set of banks considered the core of the network. The proposed method uses the structure of an agent-based model in a two-class environment. In the first class, data from the actual balance sheets of 22 large and systemic banks (such as BNP Paribas or Barclays) are collected. In the second one, to model a network as close as possible to the actual interbank market, 578 fictitious banks smaller than those belonging to the first class have been split into two groups of small and medium ones. All banks are active on the European interbank network and have deposit and market activity. A simulation of 12 three-month periods, representing a medium-term time interval of three years, is projected. In each period, there is a set of behavioral descriptions: repayment of matured loans, liquidation of deposits, income from securities, collection of new deposits, new credit demands, and securities sales. The last two actions are part of the refunding process developed in this paper. To strengthen the reliability of the proposed model, random parameter dynamics are managed with stochastic equations, the rate variations of which are generated by the Vasicek model. The Central Bank is considered the lender of last resort, which allows banks to borrow at the repo rate, and some conditions for ejecting banks from the system are introduced.
A liquidity crunch due to an exogenous crisis is simulated in the first class, and the loss impact on the other bank classes is analyzed through aggregate values representing the aggregate of loans and/or the aggregate of borrowing between classes. It is mainly shown that the three groups of the European interbank network do not have the same response, and that intermediate banks are the most sensitive to liquidity risk.
Keywords: Systemic Risk, Financial Contagion, Liquidity Risk, Interbank Market, Network Model.
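The stochastic rate dynamics referenced above can be simulated with a standard Euler-Maruyama discretization of the Vasicek model over the paper's 12 quarterly steps; the parameter values in the sketch are illustrative, not calibrated to the model's data.

```python
import numpy as np

def vasicek_paths(r0, kappa, theta, sigma, T=3.0, steps=12, n_paths=5, seed=0):
    """Euler-Maruyama simulation of the Vasicek model
        dr = kappa*(theta - r)*dt + sigma*dW
    over 12 quarterly steps (the three-year horizon used in the paper)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.full(n_paths, r0, dtype=float)
    out = [r.copy()]
    for _ in range(steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        r = r + kappa * (theta - r) * dt + sigma * dw
        out.append(r.copy())
    return np.array(out)          # shape: (steps+1, n_paths)

# Illustrative (uncalibrated) parameters
paths = vasicek_paths(r0=0.02, kappa=0.5, theta=0.03, sigma=0.01)
print(paths.round(4))
```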
1840 Health Risk Assessment for Sewer Workers using Bayesian Belief Networks
Authors: Kevin Fong-Rey Liu, Ken Yeh, Cheng-Wu Chen, Han-Hsi Liang
Abstract:
The sanitary sewerage connection rate has become an important indicator of advanced cities. Following the construction of sanitary sewerage, maintenance and management systems are required to keep the pipelines and facilities functioning well. These maintenance tasks often require sewer workers to enter the manholes and pipelines, which are confined spaces short of natural ventilation and full of hazardous substances. Workers in sewers can easily be exposed to a risk of adverse health effects. This paper proposes the use of Bayesian belief networks (BBN) as a higher level of non-carcinogenic health risk assessment for sewer workers. On the basis of epidemiological studies, actual hospital attendance records and expert experience, the BBN is capable of capturing the probabilistic relationships between the hazardous substances in sewers and their adverse health effects, and accordingly inferring the morbidity and mortality of the adverse health effects. The provision of the morbidity and mortality rates of the related diseases is more informative and can alleviate the drawbacks of conventional methods.
Keywords: Bayesian belief networks, sanitary sewerage, health risk assessment, hazard quotient, target organ-specific hazard index.
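At its smallest, a BBN of this kind chains a prior on exposure with a conditional probability table for the adverse effect and reads off morbidity by marginalization; the two-node sketch below uses invented probabilities purely to show the mechanics, not the paper's network.

```python
# A minimal two-node Bayesian belief network (illustrative probabilities only):
#   Exposure (high/low)  --->  AdverseEffect (yes/no)

p_exposure = {"high": 0.3, "low": 0.7}                 # prior from job profile
p_effect_given_exposure = {"high": 0.20, "low": 0.02}  # CPT: P(effect=yes | exposure)

# Marginal morbidity: P(effect=yes) = sum_e P(effect=yes | e) * P(e)
p_effect = sum(p_effect_given_exposure[e] * p_exposure[e] for e in p_exposure)
print(f"inferred morbidity P(effect) = {p_effect:.3f}")

# Diagnostic reasoning with Bayes' rule: P(exposure=high | effect=yes)
posterior_high = p_effect_given_exposure["high"] * p_exposure["high"] / p_effect
print(f"P(exposure=high | effect)   = {posterior_high:.3f}")
```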