Search results for: MATLAB® mapping toolbox.
64 Wind Power Mapping and NPV of Embedded Generation Systems in Nigeria
Authors: Oluseyi O. Ajayi, Ohiose D. Ohijeagbon, Mercy Ogbonnaya, Ameh Attabo
Abstract:
The study assessed the potential and economic viability of stand-alone wind systems for embedded generation, taking into account their benefits to small off-grid rural communities at 40 meteorological sites in Nigeria. A specific electric load profile was developed to accommodate communities consisting of 200 homes, a school and a community health centre. This load profile was incorporated within the distributed generation analysis, producing energy in the MW range while optimally meeting daily load demand for the rural communities. Twenty-four years (1987 to 2010) of wind speed data at a height of 10 m, utilized for the study, were sourced from the Nigeria Meteorological Department, Oshodi. The HOMER® optimization software was employed for the feasibility study and design. Each site was modelled with five 3 MW wind turbines, giving a designed capacity of 15 MW per site. This configuration was adopted in order to easily compare the distributed generation systems amongst the sites and determine their relative economic viability in terms of life cycle cost, as well as the levelised cost of producing energy. A net present value was estimated in terms of life cycle cost for 25 of the 40 meteorological sites. The remaining sites yielded a net present cost, meaning the installations at these locations were not economically viable under the present tariff regime for embedded generation in Nigeria.
Keywords: Wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, Nigeria.
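The site ranking described above rests on standard discounted-cash-flow arithmetic. The sketch below is a minimal illustration of net present value and levelised cost of energy for one hypothetical 15 MW site; it is not the authors' HOMER model, and every input figure (discount rate, capital cost, yield, tariff) is an invented placeholder.

```python
# Minimal NPV / levelised-cost sketch for one wind site (illustrative only;
# every figure below is an assumed placeholder, not study data or HOMER output).

def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def lcoe(rate, costs, energies_kwh):
    """Levelised cost of energy: discounted costs over discounted energy."""
    pv_cost = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    pv_energy = sum(e / (1.0 + rate) ** t for t, e in enumerate(energies_kwh))
    return pv_cost / pv_energy

lifetime = 25          # years, as in the study
rate = 0.10            # assumed discount rate
capex = 30e6           # $, assumed cost of five 3 MW turbines
opex = 0.6e6           # $/year, assumed operation and maintenance
tariff = 0.11          # $/kWh, assumed embedded-generation tariff
energy = 40e6          # kWh/year, assumed site yield

cash = [-capex] + [tariff * energy - opex] * lifetime
print("NPV  = %.1f M$" % (npv(rate, cash) / 1e6))
print("LCOE = %.3f $/kWh" % lcoe(rate, [capex] + [opex] * lifetime,
                                 [0.0] + [energy] * lifetime))
```

A positive NPV corresponds to the abstract's "net present value" sites; a negative result corresponds to the sites that yielded only a net present cost.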
63 Disparities versus Similarities: WHO GPPQCL and ISO/IEC 17025:2017 International Standards for Quality Management Systems in Pharmaceutical Laboratories
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn, P. Shivanand
Abstract:
Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential in ensuring the accuracy of test results. 'ISO/IEC 17025:2017' and 'WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)' are two quality standards commonly employed in developing laboratory quality systems. A review was conducted on the two standards to elaborate on areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where there were specific differences in the requirements of the two documents. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in expectations: GPPQCL emphasizes system procedures for planning and future budgets that will ensure continuity, whereas ISO 17025 is more focused on a risk management approach to establishing laboratory quality systems. Elements in the two documents form common standard requirements to assure the validity of laboratory test results that promote mutual recognition. The ISO standard currently has more global patronage than GPPQCL.
Keywords: ISO/IEC 17025:2017, laboratory standards, quality control, WHO GPPQCL
62 Guidelines for Developing, Supervising, Assessing and Evaluating Capstone Design Project of BSc in Electrical and Electronic Engineering Program
Authors: Muhibul Haque Bhuyan
Abstract:
The inclusion of design projects in an undergraduate electrical and electronic engineering curriculum, and the production of creative ideas in final year capstone design projects, have drawn numerous comments from the mentors and visiting program evaluator team members of the Board of Accreditation for Engineering and Technical Education (BAETE) at different public and private universities in Bangladesh. To eradicate this deficiency, which must be addressed to obtain program accreditation, a thorough change was made in the Department of Electrical and Electronic Engineering (EEE) for its BSc in EEE program at Southeast University, Dhaka, Bangladesh. We suggested changes to course titles and contents, an emphasis on including capstone design projects, question setting, examining students through other standard methods, selecting and retaining Outcome-Based Education (OBE)-oriented engineering faculty members, improving laboratories by purchasing new equipment and software as well as developing new experiments for each laboratory course, and engaging students in practical designs in various courses and final year projects. This paper reports on capstone design project course objectives, course outcomes, mapping with the program outcomes, the cognitive domain of learning, assessment schemes, guidelines, suggestions and recommendations for supervision processes, assessment strategy, and rubric setting. It is expected that this will substantially improve the offering, supervision, and assessment of capstone design projects in the undergraduate EEE program to fulfill the arduous requirements of BAETE accreditation based on OBE.
Keywords: Course outcome, capstone design project, assessment and evaluation, electrical and electronic engineering.
61 Acceleration-Based Motion Model for Visual SLAM
Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. The utilization of this model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, it is often difficult to satisfy the constant velocity assumption in actual situations. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To provide a more accurate description of the camera pose acceleration, we separate the pose transformation matrix into its rotation matrix and translation vector components, with the rotation matrix represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and in translational velocity remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORB-SLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method yields a more accurate initial pose estimation, resulting in improvements of 6.61% and 6.46% in the accuracy of the ORB-SLAM3 system on the two test sequences, respectively.
Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.
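The prediction step the abstract describes can be sketched as follows: the relative motion between recent frames is expressed as a rotation vector plus a translation, and the frame-to-frame change in both is assumed constant over a short window. This is an illustrative reconstruction under those stated assumptions, not the authors' ORB-SLAM3 code; it uses SciPy's Rotation class for the exponential map.

```python
# Sketch of a constant-acceleration pose prediction step under the stated
# assumptions (illustrative; not the authors' ORB-SLAM3 implementation).
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_motion(T_prev, T_curr):
    """Rotation vector and translation increment from T_prev to T_curr
    (4x4 homogeneous camera poses)."""
    dR = T_prev[:3, :3].T @ T_curr[:3, :3]
    dp = T_curr[:3, 3] - T_prev[:3, 3]
    return R.from_matrix(dR).as_rotvec(), dp

def predict_next_pose(T0, T1, T2):
    """Extrapolate T3 assuming the frame-to-frame changes in angular velocity
    and translational velocity stay constant (constant acceleration)."""
    w1, v1 = relative_motion(T0, T1)    # earlier inter-frame motion
    w2, v2 = relative_motion(T1, T2)    # latest inter-frame motion
    w3 = w2 + (w2 - w1)                 # constant rotational acceleration
    v3 = v2 + (v2 - v1)                 # constant translational acceleration
    T3 = np.eye(4)
    T3[:3, :3] = T2[:3, :3] @ R.from_rotvec(w3).as_matrix()
    T3[:3, 3] = T2[:3, 3] + v3
    return T3
```

Setting w3 = w2 and v3 = v2 instead recovers the constant velocity model that the paper identifies as the source of initial-pose error.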
60 An Intelligent Cascaded Fuzzy Logic Based Controller for Controlling the Room Temperature in Hydronic Heating System
Authors: Vikram Jeganathan, A. V. Sai Balasubramanian, N. Ravi Shankar, S. Subbaraman, R. Rengaraj
Abstract:
Heating systems are a necessity for regions that endure extreme cold weather throughout the year. To maintain a comfortable temperature inside a given place, heating systems based on hydronic boilers are used, with the single pipe system serving as the basis for their operation. It is mandatory for these heating systems to control the room temperature, thus maintaining a warm environment. In this paper, regulation of the room temperature over a wide range is established using an Adaptive Fuzzy Controller (AFC). This fuzzy controller automatically detects changes in the outside temperature and correspondingly maintains the inside temperature at a comfortable value. Two separate AFCs are put to use to carry out this function: one to determine the quantity of heat needed to reach the prospective temperature required and to set the desired temperature; the other to control the position of the valve, which is directly proportional to the error between the present room temperature and the user's desired temperature. The fuzzy logic controls the position of the valve as per the heat requirement. The amount by which the valve opens or closes is controlled by 5 knob positions, which vary from minimum to maximum, thereby regulating the amount of heat flowing through the valve. For the given test system data, different defuzzification methods have been implemented and the results compared. In order to validate the effectiveness of the proposed approach, a fuzzy controller has been designed using test data obtained from a real time system. The simulations are performed in MATLAB and verified with standard system data. The proposed approach can be implemented for real time applications.
Keywords: Adaptive fuzzy controller, hydronic heating system.
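As a rough illustration of the valve-control AFC stage, the sketch below maps a temperature error to one of five valve "knob" positions using triangular membership functions, Mamdani max-min inference and centroid defuzzification. The membership functions and rule table are hypothetical stand-ins, not the authors' controller.

```python
# Toy version of the valve-control fuzzy stage (membership functions and the
# rule table are hypothetical stand-ins, not the authors' controller).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def valve_position(error):
    """Map temperature error (deg C) to a valve opening in [0, 1] using
    Mamdani max-min inference over 5 'knob' positions, then centroid
    defuzzification."""
    u = np.linspace(0.0, 1.0, 501)              # candidate valve openings
    e_centers = [-8.0, -4.0, 0.0, 4.0, 8.0]     # error fuzzy-set peaks
    k_centers = [0.0, 0.25, 0.5, 0.75, 1.0]     # the 5 knob positions
    agg = np.zeros_like(u)
    for ec, kc in zip(e_centers, k_centers):    # rule: IF error ~ ec THEN knob ~ kc
        w = tri(error, ec - 4.0, ec, ec + 4.0)  # rule firing strength
        agg = np.maximum(agg, np.minimum(w, tri(u, kc - 0.25, kc, kc + 0.25)))
    return float((agg * u).sum() / max(agg.sum(), 1e-9))   # centroid

print(valve_position(5.0))  # large positive error -> valve mostly open
```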
59 Numerical Investigation of Unsteady MHD Flow of Second Order Fluid in a Tube of Elliptical Cross-Section on the Porous Boundary
Authors: S. B. Kulkarni, Hasim A. Chikte, V. Murali Mohan
Abstract:
An exact solution of the unsteady MHD flow of an elastico-viscous fluid through a porous medium in a tube of elliptic cross section, under the influence of a magnetic field and a constant pressure gradient, is obtained in this paper. Initially, the flow is generated by a constant pressure gradient. After the steady state is attained, the pressure gradient is suddenly withdrawn, and the resulting fluid motion in the tube is investigated, taking into account the porosity factor and the magnetic parameter of the bounding surface. The problem is solved in two stages: the first is the steady motion in the tube under a constant pressure gradient; the second concerns the unsteady motion. The problem is solved employing the separation of variables technique. The results are expressed in terms of a non-dimensional porosity parameter, a magnetic parameter, and an elastico-viscosity parameter, which depends on the non-Newtonian coefficient. The flow parameters are found to be identical with those of the Newtonian case as the elastico-viscosity and magnetic parameters tend to zero and porosity tends to infinity. The numerical results were simulated in MATLAB to analyze the effects of the elastico-viscosity, porosity, and magnetic parameters on the velocity profile, with the boundary conditions satisfied. It is seen that the elastico-viscosity parameter, porosity parameter, and magnetic parameter of the bounding surface have a significant effect on the velocity profile.
Keywords: Elastico-viscous fluid, porous media, elliptic cross-section, magnetic parameter, numerical simulation.
58 Long Term Examination of the Profitability Estimation Focused on Benefits
Authors: Stephan Printz, Kristina Lahl, René Vossen, Sabina Jeschke
Abstract:
Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risks involved in this complex decision making process, the need arises for well-structured support activities. A method that considers both cost and long-term added value is the cost-benefit effectiveness estimation. One such method is the “profitability estimation focused on benefits – PEFB” method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges associated with strategic investment decisions by integrating long-term non-monetary aspects whilst also mapping the chronological sequence of an investment within the organization’s target system. Thus, this method is characterized as a holistic approach to the evaluation of the costs and benefits of an investment. This participation-oriented method was applied to business environments in many workshops. The results of the workshops are a library of more than 96 cost aspects, as well as 122 benefit aspects. These aspects are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects regarding their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support within the PEFB method.
Keywords: Cost-benefit analysis, multi-criteria decision, profitability estimation focused on benefits, risk and uncertainty analysis.
57 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study
Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim
Abstract:
Renewable energy-based micro-grids are presently attracting significant consideration. The smart grid system is presently considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating technical and economic performance along with the environmental impacts. The micro-grid load comprises two small factories, and both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated load pattern peaks at 76 kW. The system is modeled and simulated with MATLAB/Simulink to identify the technical issues based on the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. When the cost of carbon dioxide emissions is considered, the off-grid system becomes competitive with the on-grid system, as the COE is then found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
Keywords: Optimum energy systems, renewable energy sources, smart grid, micro-grid system, on-grid system, off-grid system, modeling and simulation, economic evaluation, net present value, cost of energy, environmental impacts.
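The two economic criteria named in the abstract, net present cost and cost of generated electricity, follow conventional micro-grid definitions built on the capital recovery factor. The sketch below shows those definitions with assumed placeholder inputs, not the study's HOMER data.

```python
# Net present cost (NPC) and cost of energy (COE), as conventionally defined
# in micro-grid tools such as HOMER. All inputs are assumed placeholders.

def crf(i, n):
    """Capital recovery factor: converts a present cost to a uniform annuity."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

i, n = 0.08, 25                 # assumed real discount rate, project lifetime
capex = 250_000.0               # $ initial capital (assumed)
annual_om = 12_000.0            # $/yr operating cost (assumed)
annual_energy = 180_000.0       # kWh/yr served load (assumed)

annualized_cost = capex * crf(i, n) + annual_om
npc = annualized_cost / crf(i, n)           # total net present cost, $
coe = annualized_cost / annual_energy       # $/kWh
print(f"NPC = {npc:,.0f} $,  COE = {coe:.3f} $/kWh")
```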
56 A Set Theory Based Factoring Technique and Its Use for Low Power Logic Design
Authors: Padmanabhan Balasubramanian, Ryuta Arisaka
Abstract:
Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinational logic functions, developed on the basis of the set theory paradigm, is presented in this paper. The impact of factoring is analyzed mainly from a low power design perspective for standard cell based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) and not cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130 nm (0.13 μm) UMC CMOS library, for the typical case. The wire-loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low power perspective, for arbitrary examples. Though the benchmark experiments report mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85% respectively. In terms of leakage power, the average savings for the factored forms was a significant 23.48%. The factored solution is also expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
Keywords: Factorization, set theory, logic function, standard cell based design, low power.
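The set-theoretic view of algebraic factoring can be illustrated compactly: a sum-of-products cover is a set of cubes, each cube a set of literals, and factoring extracts a literal common to many cubes. The sketch below is a toy single-literal division, far simpler than the paper's heuristic, shown only to fix the representation.

```python
# Tiny illustration of algebraic factoring with cubes represented as sets of
# literals (a sketch of the set-theoretic view, not the authors' heuristic).

def factor_once(cubes):
    """cubes: set of frozensets of literal strings, e.g. {'a','b'} for ab.
    Pull out the literal shared by the most cubes: f = lit*(quotient) + rest."""
    literals = {lit for cube in cubes for lit in cube}
    best = max(literals, key=lambda l: sum(l in c for c in cubes))
    if sum(best in c for c in cubes) < 2:
        return None                              # nothing worth factoring
    quotient = {frozenset(c - {best}) for c in cubes if best in c}
    remainder = {c for c in cubes if best not in c}
    return best, quotient, remainder

def sop(cubes):
    """Render a set of cubes as a sum-of-products string."""
    return " + ".join("".join(sorted(c)) if c else "1"
                      for c in sorted(cubes, key=sorted))

f = {frozenset("ab"), frozenset("ac"), frozenset("ad"), frozenset("bc")}
lit, q, r = factor_once(f)
print(f"{sop(f)}  =  {lit}({sop(q)}) + {sop(r)}")
# -> ab + ac + ad + bc  =  a(b + c + d) + bc
```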
55 Capacity Building for Hazmat Transport Emergency Preparedness: 'Hotspot Impact Zone' Mapping from Flammable and Toxic Releases
Authors: U K Chakrabarti, Jigisha Parikh
Abstract:
Hazardous material transportation by road carries an inherent risk of accidents causing loss of life, grievous injuries, property losses and environmental damage. The most common type of hazmat road accident is the release (78%) of hazardous substances, followed by fires (28%), explosions (14%) and vapour/gas clouds (6%). The paper first discusses the probable 'impact zones' likely to be caused by one flammable chemical (LPG) and one toxic chemical (ethylene oxide) being transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment depending on the traffic of the particular chemicals and the population distribution within 500 meters on either side. The thermal radiation and explosion overpressure have been calculated for LPG/ethylene oxide BLEVE scenarios, along with a toxic release scenario for ethylene oxide. In addition, dispersion calculations for the ethylene oxide toxic release have been made for each 'hotspot' location, and the impact zones have been mapped for the LOC concentrations. Subsequently, the maximum initial isolation and protective zones were calculated based on the ERPG-3 and ERPG-2 values of ethylene oxide respectively, estimated for the worst case scenario under worst weather conditions. The data analysis will help the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and provides quantitative inputs to augment the District Offsite Emergency Plan document.
Keywords: Hotspot, ethylene oxide, LPG, MAH (Major Accident Hazard).
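For the thermal-radiation part of such impact-zone mapping, a generic point-source screening model gives the flavour of the calculation: the radiant flux decays with the square of distance, so each damage threshold maps to a hazard radius. This is a textbook formula with assumed inputs, not the consequence model used in the study.

```python
# Point-source thermal radiation screening (a generic textbook model, not the
# authors' consequence software; all inputs are assumed for illustration).
import math

def hazard_radius(m_burn, Hc, q_threshold, eta=0.25, tau=0.8):
    """Distance at which radiant flux falls to q_threshold (kW/m^2).
    m_burn: burning/release rate (kg/s), Hc: heat of combustion (kJ/kg),
    eta: fraction of heat radiated, tau: atmospheric transmissivity."""
    Q_rad = eta * m_burn * Hc * tau              # radiated power, kW
    return math.sqrt(Q_rad / (4.0 * math.pi * q_threshold))

# Assumed LPG fire: 20 kg/s burning, Hc ~ 46,000 kJ/kg
for q in (37.5, 12.5, 4.0):                      # common screening thresholds
    print(f"q = {q:5.1f} kW/m2 -> r = {hazard_radius(20.0, 46000.0, q):6.1f} m")
```

Flux thresholds around 37.5, 12.5 and 4 kW/m² are commonly used in screening for equipment damage, escalation and injury respectively.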
54 Technical Aspects of Closing the Loop in Depth-of-Anesthesia Control
Authors: Gorazd Karer
Abstract:
When performing a diagnostic procedure or surgery in general anesthesia (GA), a proper introduction and dosing of anesthetic agents is one of the main tasks of the anesthesiologist. That being said, depth of anesthesia (DoA) also seems to be a suitable process for closed-loop control implementation. To implement such a system, one must be able to acquire the relevant signals online and in real-time, as well as stream the calculated control signal to the infusion pump. However, during a procedure, patient monitors and infusion pumps are purposely unable to connect to an external (possibly medically unapproved) device for safety reasons, thus preventing closed-loop control. This paper proposes a conceptual solution to the aforementioned problem. First, it presents some important aspects of contemporary clinical practice. Next, it introduces the closed-loop-control-system structure and the relevant information flow. Focusing on transferring the data from the patient to the computer, it presents a non-invasive image-based system for signal acquisition from a patient monitor for online depth-of-anesthesia assessment. Furthermore, it introduces a User-Datagram-Protocol-based (UDP-based) communication method that can be used for transmitting the calculated anesthetic inflow to the infusion pump. The proposed system is independent of medical-device manufacturer and is implemented in MATLAB-Simulink, which can be conveniently used for DoA control implementation. The proposed scheme has been tested in a simulated GA setting and is ready to be evaluated in an operating theatre. However, the proposed system is only a step towards a proper closed-loop control system for DoA, which could routinely be used in clinical practice.
Keywords: Closed-loop control, Depth of Anesthesia, DoA, optical signal acquisition, Patient State index, PSi, UDP communication protocol.
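The UDP leg of the proposed information flow is straightforward to prototype. The sketch below sends a computed inflow value to a pump-side listener; the address, port and packet layout are assumptions for illustration, not a medical-device protocol.

```python
# Minimal UDP sender of the kind described for streaming the computed
# anesthetic inflow to a pump-side gateway (sketch; address, port and packet
# layout are assumptions, not a medical-device protocol).
import socket
import struct
import time

PUMP_ADDR = ("192.168.1.50", 5005)      # assumed IP/port of the listener

def send_inflow(sock, inflow_ml_per_h, seq):
    """Pack sequence number, timestamp and inflow, then send one datagram."""
    payload = struct.pack("<Idd", seq, time.time(), inflow_ml_per_h)
    sock.sendto(payload, PUMP_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_inflow(sock, 42.0, seq=1)          # e.g. 42 ml/h computed by the controller
```

Because UDP is connectionless and unacknowledged, a production system would need sequence checking and a fail-safe on the receiving side, consistent with the paper's caution that this is only a step towards routine clinical use.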
53 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program
Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata
Abstract:
The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. This interface program was developed in MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and results display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimal geometries from Computed Tomography (CT) images for dose calculations of the proton beam. The result of this technique is a set of non-overlapping squares of different sizes in each image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method has been done in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while a region of interest is selected manually, and plots the dose distribution of the proton beam superimposed onto the CT images.
Keywords: Monte Carlo, CT images, quadtree decomposition, interface program, proton beam.
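The quadtree idea in the abstract, fine cells near heterogeneity and coarse cells in homogeneous regions, can be sketched in a few lines. The splitting criterion (intensity range against a tolerance) and the toy image below are assumptions for illustration.

```python
# Sketch of quadtree decomposition of a 2-D image into homogeneous square
# cells (illustrative; thresholds and splitting rule are assumptions).
import numpy as np

def quadtree(img, x, y, size, tol, leaves):
    """Split the square img[y:y+size, x:x+size] until its intensity range
    is within tol, collecting (x, y, size, mean) leaves."""
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        leaves.append((x, y, size, float(block.mean())))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(img, x + dx, y + dy, h, tol, leaves)

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:40, 8:30] = 1.0       # a crude "organ"
img += 0.01 * rng.standard_normal(img.shape)           # CT-like noise
leaves = []
quadtree(img, 0, 0, 64, tol=0.1, leaves=leaves)
print(len(leaves), "cells instead of", 64 * 64, "pixels")
```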
52 Exploring SL Writing and SL Sensitivity during Writing Tasks: Poor and Advanced Writing in a Context of Second Language Other than English
Authors: S. Figueiredo, M. Alves Martins, C. Silva, C. Simões
Abstract:
This study is part of a larger empirical research project that examines second language (SL) learners’ profiles and valid procedures for performing complete and diagnostic assessment in schools. 102 learners of Portuguese as a SL, aged 7 to 17 years and speakers of distinct home languages, were assessed in several linguistic tasks. In this article, we focus on writing performance in the specific task of narrative essay composition. The written outputs were measured using scores in six components adapted from an English SL assessment context (Alberta Education): linguistic vocabulary, grammar, syntax, strategy, socio-linguistic, and discourse. The writing processes and strategies in the Portuguese language used by different immigrant students were analysed to determine the features and diversity of deficits in authentic texts produced by SL writers. Differentiated performance was based on the diversity of the following variables: grades, previous schooling, home language, instruction in the first language, and exposure to Portuguese as a second language. Indo-Aryan language speakers showed low writing scores compared to their peers, and the type of language and its respective cognitive mapping (such as Mandarin and Arabic) was the predictor, not linguistic distance. Home language instruction should also be prominently considered in further research to understand the specificities of the cognitive academic profile in a Romance language learning context. Additionally, this study examined the teachers’ representations, which are addressed here to understand the educational implications of second language teaching for the psychological distress of different minorities in the schools of specific host countries.
Keywords: Second language, writing assessment, home language, immigrant students, Portuguese language.
51 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution
Authors: Nikolay P. Brayanov, Anna V. Stoynova
Abstract:
The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies the description of systems, allowing domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping, verification and validation of the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially in software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the automatically generated embedded code with a manually developed one. The measurements show that, in general, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Keywords: Embedded code generation, embedded C code quality, embedded systems, model-based development.
50 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on an elastic perfectly plastic soil is carried out in this study. Specifically, nonlinear finite element modeling of the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, and the Newton-Raphson incremental iteration method is implemented in a MATLAB code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young’s modulus of the beam, and the coefficient of variation and correlation length of the soil’s coefficient of subgrade reaction. A comparison between the beam resting on the linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: Finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability.
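The Newton-Raphson incremental iteration scheme mentioned above can be shown on a scalar surrogate: an elastic spring with a cubic (nonlinear soil-like) term, loaded in increments, with equilibrium restored by Newton iterations at each step. The full beam finite element assembly is omitted and the stiffness values are assumed.

```python
# Newton-Raphson incremental-iterative scheme of the type used to solve the
# nonlinear soil-beam equations (scalar demo on a hardening spring; the beam
# FE assembly itself is omitted and all stiffness values are assumed).
import numpy as np

def residual(w, f, k, k3):
    return k * w + k3 * w**3 - f       # internal force minus external load

def tangent(w, k, k3):
    return k + 3.0 * k3 * w**2         # consistent tangent stiffness

k, k3 = 1.0e4, 5.0e6                   # assumed linear and cubic stiffnesses
w = 0.0
for f in np.linspace(0.0, 2000.0, 10): # load increments
    for it in range(20):               # Newton iterations per increment
        r = residual(w, f, k, k3)
        if abs(r) < 1e-8 * max(f, 1.0):
            break
        w -= r / tangent(w, k, k3)
    print(f"f = {f:7.1f} N -> w = {w:.5f} m  ({it} iterations)")
```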
49 Power Factor Correction Based on High Switching Frequency Resonant Power Converter
Authors: B. Sathyanandhi, P. M. Balasubramaniam
Abstract:
This paper presents a buck-boost converter topology for maintaining the input power factor by means of a power factor correction stage control and a regulation stage control. With an RL load, the power factor is reduced due to the total harmonic distortion present in the current waveform. To improve the power factor, the current waveform should follow the fundamental component of the voltage waveform, which can be achieved using a high-frequency power converter. Based on its resonant circuit, the converter is able to perform the function of a buck, boost, or buck-boost converter. Here, the buck-boost converter is used because it has more advantages than the boost converter. The switching action of the power converter is driven by an external zero comparator in the PFC stage control, and the resonant circuit is used to control the output voltage gain of the converter. The power converter is operated at a very high switching frequency, around 400 kHz, in order to overcome the switching losses of the power converter. Owing to the high switching frequency, the power factor improves and the total harmonic distortion present in the current waveform is reduced. These results were generated through simulation using MATLAB/Simulink. As with the buck and boost converters, the operation of the buck-boost converter is best understood in terms of the inductor's "reluctance" to allow rapid changes in current; this also reduces the total harmonic distortion (THD) in the input current waveform, which improves the input power factor for the type of load used.
Keywords: Buck-boost converter, high switching frequency, power factor correction, power factor correction stage, regulation stage, total harmonic distortion (THD).
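The THD figure that drives the power factor argument has a standard spectral definition: the ratio of the RMS of the harmonics to the fundamental. A minimal sketch, with a made-up distorted current waveform, is shown below.

```python
# Computing the total harmonic distortion (THD) of a current waveform from
# its spectrum (generic definition; the waveform below is a made-up example).
import numpy as np

def thd(signal, fs, f0, n_harm=20):
    """THD = sqrt(sum of harmonic powers) / fundamental, from an FFT."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    def mag(f):                        # magnitude at the bin nearest f
        return spec[np.argmin(np.abs(freqs - f))]
    fund = mag(f0)
    harms = [mag(k * f0) for k in range(2, n_harm + 1)]
    return np.sqrt(sum(h * h for h in harms)) / fund

fs, f0 = 100_000, 50.0
t = np.arange(0.0, 0.2, 1.0 / fs)
i_load = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)
print(f"THD = {100 * thd(i_load, fs, f0):.1f} %")   # ~20 % for this example
```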
48 Dynamic Simulation of a Hybrid Wind Farm with Wind Turbines and Distributed Compressed Air Energy Storage System
Authors: Eronini Umez-Eronini
Abstract:
Compressed air energy storage (CAES) coupled with wind farms have gained attention as a means to address the intermittency and variability of wind power. However, most existing studies and implementations focus on bulk or centralized CAES plants. This study presents a dynamic model of a hybrid wind farm with distributed CAES, using air storage tanks and compressor and expander trains at each wind turbine station. It introduces the concept of a distributed CAES with linked air cooling and heating, and presents an approach to scheduling and regulating the production of compressed air and power in such a system. Mathematical models of the dynamic components of this hybrid wind farm system, including a simple transient wake field model, were developed and simulated using MATLAB, with real wind data and Transmission System Operator (TSO) absolute power reference signals as inputs. The simulation results demonstrate that the proposed ad hoc supervisory controller is able to track the minute-scale power demand signal within an error band size comparable to the electrical power rating of a single expander. This suggests that combining the global distributed CAES control with power regulation for individual wind turbines could further improve the system’s performance. The round trip electrical storage efficiency computed for the distributed CAES was also in the range of reported round trip storage electrical efficiencies for improved bulk CAES. These findings contribute to the enhancement of efficiency of wind farms without access to large-scale storage or underground caverns.
Keywords: Distributed CAES, compressed air, energy storage, hybrid wind farm, wind turbines, dynamic simulation.
47 Application of Remote Sensing for Monitoring the Impact of Lapindo Mud Sedimentation for Mangrove Ecosystem: Case Study in Sidoarjo, East Java
Authors: Akbar Cahyadhi Pratama Putra, Tantri Utami Widhaningtyas, M. Randy Aswin
Abstract:
Indonesia, as an archipelagic nation, has a very long coastline with significant potential for marine resources, including mangrove ecosystems. The Lapindo mudflow disaster in Sidoarjo, East Java, resulted in mudflow being discharged into the sea through the Brantas and Porong rivers. The mud material transported by the river flow is feared to be dangerous because it contains harmful substances such as heavy metals. This study aims to map the mangrove ecosystem in terms of its density and assess the impact of the Lapindo mud disaster on the mangrove ecosystem, along with efforts to sustain its continuity. The mapping of the coastal mangrove conditions in Sidoarjo was carried out using remote sensing products, specifically Landsat 7 ETM+ images, taken during dry months in 2002, 2006, 2009, and 2014. The density of mangroves was determined using NDVI, which utilizes band 3 (the red channel) and band 4 (the near IR channel). Image processing to generate NDVI was performed using ENVI 5.1 software. The NDVI results were used to assess mangrove density on a scale from 0 to 1. The growth of mangrove ecosystems, both in terms of area and density, showed a significant increase from year to year. The development of mangrove ecosystems was influenced by the deposition of Lapindo mud in the estuaries of the Porong and Brantas rivers, where the silt provided a suitable medium for the growth of the mangrove ecosystem, leading to an increase in its density. The rise in density was supported by public awareness to mitigate heavy metal contamination, allowing for mangrove breeding near the affected areas.
Keywords: Archipelagic nation, Mangrove, Lapindo mudflow disaster, NDVI.
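The NDVI computation named in the abstract is a simple band ratio. The sketch below applies it to hypothetical digital-number patches standing in for Landsat 7 ETM+ band 3 (red) and band 4 (near-IR) rasters; the density threshold is an assumption.

```python
# NDVI from Landsat 7 ETM+ bands 3 (red) and 4 (near-IR), as used in the
# study (a numpy sketch; the file-reading step is assumed and omitted).
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), guarded against division by zero."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-12)

# Hypothetical 3x3 digital-number patches standing in for the real rasters
red = np.array([[40, 42, 90], [38, 41, 95], [39, 40, 92]])
nir = np.array([[120, 118, 60], [125, 119, 58], [122, 117, 61]])
v = ndvi(red, nir)
dense_mangrove = v > 0.5            # assumed density threshold
print(np.round(v, 2)); print(dense_mangrove)
```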
46 A Neuroscience-Based Learning Technique: Framework and Application to STEM
Authors: Dante J. Dorantes-González, Aldrin Balsa-Yepes
Abstract:
Existing learning techniques such as problem-based learning, project-based learning, or case study learning focus mainly on technical details, but give no specific guidelines on the learner’s experience and emotional aspects of learning, such as arousal salience and valence, even though emotional states are important factors affecting engagement and retention. Some approaches involving emotion in educational settings, such as social and emotional learning, lack neuroscientific rigor and make no use of specific neurobiological mechanisms. On the other hand, neurobiology approaches lack educational applicability, and educational approaches mainly focus on cognitive aspects and disregard conditioning learning. The authors first explain the reasons why it is hard to learn thoughtfully, then use the method of neurobiological mapping to track the main limbic system functions, such as the reward circuit, and their relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as the brain cortex. The authors conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that guarantee long-term memory potentiation. Afterward, the educational framework for practical application and the instructors’ guidelines are established. An implementation example in engineering education is given, namely, the study of tuned-mass dampers for the attenuation of earthquake oscillations in skyscrapers. This work represents an original learning technique based on nonconscious learning mechanisms to enhance long-term memories that complements existing cognitive learning methods.
Keywords: Emotion, emotion-enhanced memory, learning technique, STEM.
45 High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis
Authors: A. Ghanbari Mardasi, N. Wu, C. Wu
Abstract:
In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. The wavelet scale in the spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to erase the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. A theoretical model and vibration analysis considering the crack effect are first proposed and performed in MATLAB, based on the Timoshenko beam model. The Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect and thereby locate the crack. Relative wavelet coefficients are obtained for sensitivity analysis by comparing the coefficient values at different positions of the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale, corresponding to the highest relative wavelet coefficient at the crack position, is obtained for each vibration mode through numerical simulations. The same procedure is performed for cracks of different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, a Hanning window is applied to the different vibration mode shapes in order to overcome the edge effect problem of the wavelet transformation and its impact on the localization of cracks close to the measurement boundaries. Comparison of the wavelet coefficient distributions of the windowed and initial mode shapes demonstrates that the window function eases the identification of cracks close to the boundaries.
Keywords: Edge effect, scale optimization, small crack locating, spatial wavelet.
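The core of the method, a spatial Gabor wavelet transform of a windowed mode shape, can be sketched as below. The beam model is reduced to a sine mode with a small artificial kink standing in for the crack-induced slope discontinuity, and the scale is an assumed near-optimal value rather than the paper's optimized one.

```python
# Spatial Gabor wavelet scan of a windowed mode shape (illustrative; the
# "crack" is a small artificial kink and the scale is an assumed value).
import numpy as np

def gabor(xk, scale, f0=1.0):
    """Real Gabor mother wavelet sampled at xk/scale."""
    u = xk / scale
    return np.exp(-0.5 * u ** 2) * np.cos(2.0 * np.pi * f0 * u) / np.sqrt(scale)

x = np.linspace(0.0, 1.0, 1001)                  # beam axis (normalized)
dx = x[1] - x[0]
mode = np.sin(np.pi * x)                         # first mode of a simple beam
mode += 2e-4 * np.maximum(0.0, 1.0 - np.abs(x - 0.35) / 0.01)  # kink = "crack"
windowed = mode * np.hanning(len(mode))          # suppress the edge effect

scale = 0.01                                     # assumed near-optimal scale
kernel = gabor(np.arange(-5 * scale, 5 * scale, dx), scale)
coeffs = np.convolve(windowed, kernel, mode="same")
print("peak |coefficient| at x =", x[np.argmax(np.abs(coeffs))])  # near 0.35
```

The smooth low-frequency mode contributes almost nothing at this fine scale, so the coefficient magnitude peaks near the kink, which is the mechanism the abstract describes.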
44 Performance Analysis of Modified Solar Water Heating System for Climatic Condition of Allahabad, India
Authors: Kirti Tewari, Rahul Dev
Abstract:
Solar water heating is a thermodynamic process of heating water using sunlight with the help of a solar water heater; the solar water heater is thus a device used to harness solar energy. In this paper, a modified solar water heating system (MSWHS) is proposed as an alternative to the flat plate collector (FPC) and evacuated tube collector (ETC). The modifications include the selection of materials other than glass and glass wool, which are conventionally used for fabricating FPCs and ETCs, along with some modifications in design. Its collector is made of a double layer of semi-cylindrical acrylic tubes on a fibre reinforced plastic (FRP) insulation base. The water tank is made of a double layer of acrylic sheet except for the base and north wall, where FRP is used. A concept of equivalent thickness has been utilised for calculating the dimensions of the collector plate, acrylic tube and tank. A thermal model for the proposed design of the MSWHS is developed, and a simulation is carried out in MATLAB for a 200 L MSWHS having a collector area of 1.6 m2 and acrylic tubes of length 2 m at an inclination angle of 25°, which is taken nearly equal to the latitude of Allahabad (24.45° N). The results show that the maximum temperatures of the water in the tank and tube are 71.2°C and 73.3°C, reached at 17:00 h and 16:00 h respectively, in March for the climatic data of Allahabad. A theoretical performance analysis has been carried out by varying the number of collector tubes, the tank capacity, and the climatic data for given winter and summer months.
Keywords: Acrylic, fibre reinforced plastic, solar water heating, thermal model, conventional water heaters.
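A thermal model of the kind described reduces, in its simplest lumped form, to an energy balance on the tank water integrated through the day. The sketch below uses forward Euler with an assumed efficiency, loss coefficient and irradiance profile; it is a schematic stand-in, not the paper's model.

```python
# Lumped energy balance of a collector-coupled storage tank, integrated with
# forward Euler (a schematic stand-in for the paper's thermal model; all
# coefficients and the irradiance profile are assumed).
import numpy as np

M, CP = 200.0, 4186.0          # kg of water, J/(kg K)
A, ETA = 1.6, 0.6              # collector area m^2, assumed optical efficiency
UA = 4.0                       # assumed overall tank loss coefficient, W/K

def step(T, G, Ta, dt):
    """One Euler step of M*cp*dT/dt = eta*A*G - UA*(T - Ta)."""
    return T + dt * (ETA * A * G - UA * (T - Ta)) / (M * CP)

T, dt = 25.0, 60.0             # initial water temp (C), 1-minute step
for hour in range(6, 18):      # crude daylight loop
    G = 900.0 * np.sin(np.pi * (hour - 6) / 12)      # assumed irradiance, W/m^2
    Ta = 20.0 + 10.0 * np.sin(np.pi * (hour - 8) / 12)  # assumed ambient, C
    for _ in range(60):
        T = step(T, G, Ta, dt)
print(f"Tank temperature at 18:00 ~= {T:.1f} C")
```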
43 Selective Encryption using ISMA Cryp in Real Time Video Streaming of H.264/AVC for DVB-H Application
Authors: Jay M. Joshi, Upena D. Dalal
Abstract:
Multimedia information availability has increased dramatically with the advent of video broadcasting on handheld devices. But with this availability come problems of maintaining the security of information that is displayed in public. ISMA Encryption and Authentication (ISMACryp) is one of the chosen technologies for service protection in DVB-H (Digital Video Broadcasting - Handheld), the TV system for portable handheld devices. ISMACryp content is encoded with H.264/AVC (advanced video coding), while leaving all structural data as it is. Two modes of ISMACryp are available: CTR mode (counter) and CBC mode (cipher block chaining). Both modes are based on the 128-bit AES algorithm. AES is comparatively complex and requires longer execution times, which is not suitable for real-time applications like live TV. The proposed system aims to gain a deep understanding of video data security in multimedia technologies and to provide security for real-time video applications using selective encryption for H.264/AVC. Five levels of security are proposed in this paper, based on the content of the NAL units in the Constrained Baseline profile of H.264/AVC. Selective encryption at the different levels encrypts the intra-prediction modes, residual data, inter-prediction modes, or motion vectors only. The experimental results show that the fifth level, which is full ISMACryp, provides the highest level of security at the cost of more encryption time, while the first level provides a lower level of security by encrypting only the motion vectors, with lower execution time and no compromise on compression or quality of the visual content. The encryption scheme integrates with the compression process at low cost and keeps the file format unchanged, with some direct operations supported. Simulations were carried out in MATLAB.
Keywords: AES-128, CAVLC, H.264, ISMACryp.
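The essence of selective encryption is easy to show: only payloads of chosen NAL-unit types are passed through an AES-128 CTR cipher, while everything else stays in the clear so the bitstream structure survives. The sketch below mocks the NAL parsing and uses the PyCryptodome package; it is an illustration of the idea, not the ISMACryp format.

```python
# Selective encryption sketch: encrypt only chosen NAL-unit payloads with
# AES-128 in CTR mode, leaving structural data in the clear (illustrative;
# uses the PyCryptodome package, and the NAL parsing is mocked).
from Crypto.Cipher import AES
import os

KEY = os.urandom(16)                      # 128-bit content key

def encrypt_selected(nal_units, selected_types):
    """nal_units: list of (nal_type, payload bytes). Payloads whose type is
    in selected_types (e.g. motion-vector data only, for the lowest security
    level) are AES-CTR encrypted; everything else passes through."""
    out = []
    for nal_type, payload in nal_units:
        if nal_type in selected_types:
            cipher = AES.new(KEY, AES.MODE_CTR, nonce=os.urandom(8))
            out.append((nal_type, cipher.nonce + cipher.encrypt(payload)))
        else:
            out.append((nal_type, payload))
    return out

stream = [("sps", b"structural data"), ("mv", b"motion-vector bits"),
          ("residue", b"residual data")]
protected = encrypt_selected(stream, selected_types={"mv"})
```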
42 Spatial Mapping of Dengue Incidence: A Case Study in Hulu Langat District, Selangor, Malaysia
Authors: Er, A. C., Rosli, M. H., Asmahani A., Mohamad Naim M. R., Harsuzilawati M.
Abstract:
Dengue is a mosquito-borne infection that has peaked to an alarming rate in recent decades. It is found in tropical and sub-tropical climates. In Malaysia, dengue has been declared one of the national health threats to the public. This study aimed to map the spatial distribution of dengue cases in the district of Hulu Langat, Selangor via a combination of Geographic Information System (GIS) and spatial statistic tools. Data related to dengue were gathered from the various government health agencies, and the locations of dengue cases were geocoded using a handheld Trimble Juno SB GPS. A total of 197 dengue cases occurring in 2003 were used in this study. Those data were then aggregated to the sub-district level and converted into GIS format. The study also used population and demographic data as well as the boundary of Hulu Langat. To assess the spatial distribution of dengue cases, three spatial statistics methods (Moran's I, average nearest neighbor (ANN) analysis, and kernel density estimation) were applied together with spatial analysis in the GIS environment. These indices were used to analyze the spatial distribution and average distance of dengue incidences and to locate the hot spots of dengue cases. The results indicated that the dengue cases were clustered (p < 0.01) when analyzed using Moran's I, with a z-score of 5.03. The ANN analysis yielded an average nearest neighbor ratio of 0.518755, which is less than 1 (p < 0.0001); from this result, we can conclude that the dengue case pattern in Hulu Langat district is clustered. The z-score for dengue incidence within the district was -13.0525 (p < 0.0001). It was also found that significant spatial autocorrelation of dengue incidences occurs at an average distance of 380.81 meters (p < 0.0001). Several locations, especially residential areas, were identified as hot spots of dengue cases in the district.
Keywords: Dengue, geographic information system (GIS), spatial analysis, spatial statistics
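The clustering test reported above is the global Moran's I statistic. The sketch below implements the textbook formula with a row-standardized weight matrix on four hypothetical sub-district counts; the study's actual weights and case data are not reproduced.

```python
# Global Moran's I for area counts with a row-standardized weight matrix
# (textbook formula; the toy data below are not the study's).
import numpy as np

def morans_i(x, W):
    """x: values per area; W: spatial weights with a zero diagonal."""
    n = len(x)
    W = W / W.sum(axis=1, keepdims=True)        # row-standardize
    z = x - x.mean()
    num = n * (z @ W @ z)
    den = W.sum() * (z @ z)
    return num / den

# 4 hypothetical sub-districts with a simple adjacency structure
x = np.array([30.0, 28.0, 5.0, 4.0])            # dengue cases
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(f"Moran's I = {morans_i(x, W):.3f}")      # > 0 indicates clustering
```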
41 A New Distribution Network Reconfiguration Approach using a Tree Model
Authors: E. Dolatdar, S. Soleymani, B. Mozafari
Abstract:
Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation that determines optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches. In fact, the problem can be viewed as one of determining an optimal tree of the graph, which simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, with some improvements made to the chromosome coding. For comparison, an implementation of the algorithm presented in [7] is applied, with modifications to the load flow program. In [7], the choice of the switches to be opened is based on simple heuristic rules; that algorithm reduces the number of load flow runs, reduces the switching combinations to a fewer number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] as well as other methods.
Keywords: Distribution system, reconfiguration, loss reduction, graph theory, optimization, genetic algorithm.
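The tree-model idea can be made concrete on a small graph: a configuration is radial exactly when the closed branches form a spanning tree, i.e., when exactly one switch per independent loop is opened. The sketch below enumerates such candidates on an invented 6-bus example, with a crude resistance-sum objective standing in for a real load flow loss calculation; the paper's genetic algorithm search is not reproduced.

```python
# Radiality by construction: a distribution network is radial exactly when
# its switched topology is a spanning tree. Sketch with networkx (the tiny
# 6-bus grid and the stand-in objective are invented for illustration).
import itertools
import networkx as nx

G = nx.Graph()
# (from, to, resistance) - a toy 6-bus feeder containing two loops
edges = [(0, 1, 0.1), (1, 2, 0.2), (2, 3, 0.1),
         (3, 4, 0.2), (4, 1, 0.3), (3, 5, 0.1), (5, 0, 0.3)]
G.add_weighted_edges_from(edges, weight="r")

def crude_loss(tree):
    """Stand-in objective: sum of branch resistances kept in service
    (a real method would run a radial load flow here)."""
    return sum(d["r"] for _, _, d in tree.edges(data=True))

n_loops = G.number_of_edges() - G.number_of_nodes() + 1   # switches to open
best = None
for open_set in itertools.combinations(G.edges(), n_loops):
    H = G.copy()
    H.remove_edges_from(open_set)
    if nx.is_connected(H) and nx.is_tree(H):              # radial candidate
        loss = crude_loss(H)
        if best is None or loss < best[0]:
            best = (loss, open_set)
print("open switches:", best[1], "objective:", best[0])
```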
40 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive and slow, and will have many combinatorial effects. Those combinatorial effects impact the whole organizational structure, from a management, financial, documentation, and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be kept minimal, especially at the time of launching an organization-wide software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP package for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project: if the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then the blueprinting of its business processes, and finally the mapping of those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing those processes will result in normalized software.
Keywords: Blueprint, ERP, SDLC, Modular.
39 Surface Thermodynamics Approach to Mycobacterium tuberculosis (M-TB) – Human Sputum Interactions
Authors: J. L. Chukwuneke, C. H. Achebe, S. N. Omenyi
Abstract:
This research work presents a surface thermodynamics approach to M-TB/HIV-human sputum interactions. This involved the use of the Hamaker coefficient concept as a surface energetics tool for determining the interaction processes, with the surface interfacial energies explained using the van der Waals concept of particle interactions. The Lifshitz derivation for van der Waals forces was applied as an alternative to the contact angle approach, which has been widely used in other biological systems. The methodology involved taking sputum samples from twenty infected persons and twenty uninfected persons for absorbance measurement using a digital ultraviolet-visible spectrophotometer. The variables required for the computations with the Lifshitz formula were derived from the absorbance data. MATLAB software tools were used in the mathematical analysis of the data produced from the experiments (absorbance values). The Hamaker constants and the combined Hamaker coefficients were obtained using the values of the dielectric constant together with the Lifshitz equation. The absolute combined Hamaker coefficients A132abs and A131abs for the infected and uninfected sputum samples gave the values A132abs = 0.21631×10⁻²¹ J for M-TB infected sputum and Ã132abs = 0.18825×10⁻²¹ J for M-TB/HIV infected sputum. The significance of this result is the positive value of the absolute combined Hamaker coefficient, which suggests the existence of net positive van der Waals forces, demonstrating an attraction between the bacteria and the macrophage. This implies that infection can occur. It was also shown that in the presence of HIV, the interaction energy is reduced by 13%, confirming adverse effects observed in HIV patients suffering from tuberculosis.
Keywords: Absorbance, dielectric constant, Hamaker coefficient, Lifshitz formula, macrophage, Mycobacterium tuberculosis, van der Waals forces.
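For orientation, the zero-frequency (entropic) term of the Lifshitz expression for the combined Hamaker coefficient A132 of materials 1 and 2 interacting across medium 3 has a compact closed form; a positive result indicates net van der Waals attraction, as the abstract argues. The dielectric constants below are placeholders, not the values derived from the absorbance measurements.

```python
# Zero-frequency (entropic) term of the Lifshitz expression for the combined
# Hamaker coefficient A_132 of media 1 and 2 interacting across medium 3
# (textbook form; the dielectric constants below are placeholders, not the
# paper's measured values, and the dispersion term is omitted).
import math

K_B, T = 1.380649e-23, 310.0          # J/K, body temperature in K

def hamaker_zero_freq(eps1, eps2, eps3):
    """A_132 (nu=0 term) = (3/4) kT * ((e1-e3)/(e1+e3)) * ((e2-e3)/(e2+e3))."""
    return 0.75 * K_B * T * ((eps1 - eps3) / (eps1 + eps3)) \
                         * ((eps2 - eps3) / (eps2 + eps3))

# eps1: bacterium, eps2: macrophage, eps3: sputum medium (all assumed)
A132 = hamaker_zero_freq(2.6, 2.2, 2.0)
print(f"A132 ~ {A132:.3e} J  ({'attractive' if A132 > 0 else 'repulsive'})")
```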
38 Using Artificial Neural Network and Leudeking-Piret Model in the Kinetic Modeling of Microbial Production of Poly-β-Hydroxybutyrate
Authors: A.Qaderi, A. Heydarinasab, M. Ardjmand
Abstract:
Poly-β-hydroxybutyrate (PHB) is one of the most famous biopolymers, with various applications in the production of biodegradable carriers. The most important strategy for enhancing efficiency in the production process and reducing the price of PHB is the accurate expression of the kinetic model of product formation and of the parameters that affect it, such as dry cell weight (DCW) and substrate consumption. Considering the high capability of artificial neural networks in modeling and simulating non-linear systems, such as the multivariable systems common in the biological and chemical industries, a three-layer perceptron neural network model was used in this study for the kinetic modeling of microbial PHB production, which is a complex and non-linear biological process. An artificial neural network trains itself and finds the hidden laws behind the data by mapping experimental data, with dry cell weight and substrate concentration as inputs and PHB concentration as the output. For training the network, a series of experimental data for PHB production by Hydrogenophaga pseudoflava on a glucose carbon source was used. After training the network, two other experimental data sets that had not been involved in the network training, comprising dry cell concentration and substrate concentration, were applied as inputs to the network, and the PHB concentration was predicted by the network. Comparison of the data predicted by the network with the experimental data indicated high prediction precision for both fructose and whey carbon sources. Additionally, to better understand the ability of neural networks in modeling biological processes, the microbial production kinetics of PHB were also modeled by the Leudeking-Piret empirical equation. The observed results indicated that the artificial neural network predicted the PHB concentration more accurately than the Leudeking-Piret model.
Keywords: Kinetic modeling, poly-β-hydroxybutyrate (PHB), Hydrogenophaga pseudoflava, artificial neural network, Leudeking-Piret.
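The comparison model named above, the Leudeking-Piret equation, states that the product formation rate has a growth-associated and a non-growth-associated term: dP/dt = α·dX/dt + β·X. A minimal simulation under logistic biomass growth, with assumed parameter values, is sketched below.

```python
# Leudeking-Piret product formation, dP/dt = alpha*dX/dt + beta*X, driven by
# logistic growth (a schematic comparison model; all parameter values assumed).
import numpy as np

mu, Xmax = 0.25, 8.0            # 1/h, g/L   (assumed growth parameters)
alpha, beta = 0.15, 0.01        # growth- and non-growth-associated (assumed)

dt, t_end = 0.05, 48.0
t = np.arange(0.0, t_end, dt)
X = np.empty_like(t); P = np.empty_like(t)
X[0], P[0] = 0.1, 0.0
for k in range(1, len(t)):
    dXdt = mu * X[k-1] * (1.0 - X[k-1] / Xmax)   # logistic biomass growth
    X[k] = X[k-1] + dt * dXdt
    P[k] = P[k-1] + dt * (alpha * dXdt + beta * X[k-1])
print(f"final biomass {X[-1]:.2f} g/L, PHB {P[-1]:.2f} g/L")
```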
37 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System
Authors: Benjamin C. Agwah, Paulinus C. Eze
Abstract:
The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve very fast, accurate wheel slip tracking during hard braking and to eliminate chattering, with improved transient and steady state performance, while shortening the stopping distance using an effective braking torque less than the maximum allowable torque to bring a braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 m/s on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC by eliminating steady-state error and providing improved bandwidth to eliminate the effect of high-frequency noise such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surface conditions respectively. Generally, the proposed system used an effective braking torque less than the maximum allowable braking torque to achieve efficient wheel slip tracking and robust overall control performance on different road surfaces.
Keywords: ABS, Fuzzy Logic Controller, Variable Zero Lag Compensator, Wheel Slip Tracking.
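The wheel slip quantity being tracked is λ = (v − ωR)/v on a quarter-car model. The sketch below simulates that model with a crude on-off torque rule as a baseline; the vehicle parameters and the friction-slip curve are assumptions, and the rule is far simpler than the paper's FLC-VZLC controller.

```python
# Quarter-car wheel slip simulation with a crude on-off braking-torque rule
# (illustrative baseline only - the paper's FLC-VZLC controller is far more
# sophisticated; all vehicle parameters and the friction curve are assumed).
import numpy as np

m, J, Rw, g = 350.0, 1.0, 0.3, 9.81      # corner mass, wheel inertia, radius

def mu_slip(lmbda):                      # assumed friction-slip curve (dry)
    return 1.1 * (1.0 - np.exp(-20.0 * lmbda)) - 0.4 * lmbda

v, w, dt = 30.0, 30.0 / Rw, 1e-4         # initial speed 30 m/s, rolling wheel
lam_ref, t, dist = 0.2, 0.0, 0.0
while v > 0.5:
    lam = max(0.0, (v - w * Rw) / max(v, 1e-3))     # wheel slip
    Tb = 1200.0 if lam < lam_ref else 200.0         # crude slip-tracking rule
    Fx = mu_slip(lam) * m * g                       # tire braking force
    v += dt * (-Fx / m)                             # vehicle deceleration
    w = max(w + dt * (Fx * Rw - Tb) / J, 0.0)       # wheel spin dynamics
    dist += v * dt; t += dt
print(f"stopped in {dist:.1f} m after {t:.2f} s")
```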
36 Geostatistical Analysis and Mapping of Ground-Level Ozone in a Medium Sized Urban Area
Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez
Abstract:
Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high ozone precursor emissions and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results of a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain), are shown. Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations using an automatic portable analyzer, during periods selected according to conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution in the city. First, the exploratory analysis of the data revealed that they were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Second, in the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided when probability maps, based on kriging interpolation and the kriging standard deviation, were produced.
Keywords: Kriging, map, tropospheric ozone, variogram.
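The geostatistical machinery used in the study, a spherical variogram model feeding ordinary kriging, is standard and compact. The sketch below implements both in textbook form; the sill, range, nugget and monitoring data are assumed placeholders rather than the monthly fits from Badajoz.

```python
# Spherical variogram model and a small ordinary-kriging prediction
# (textbook implementation; sill/range/nugget and the data are assumed,
# not the monthly fits from the study).
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical semivariogram: 0 at h = 0, rises to the sill at the range."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(xy, z, x0, model):
    """Solve the OK system in semivariogram form; returns estimate, variance."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = model(d); A[-1, -1] = 0.0
    b = np.ones(n + 1); b[:n] = model(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)        # n weights plus the Lagrange multiplier
    return w[:n] @ z, w @ b          # kriged estimate and kriging variance

model = lambda h: spherical(h, nugget=5.0, sill=60.0, rng=500.0)  # assumed fit
xy = np.array([[0.0, 0.0], [300.0, 50.0], [150.0, 400.0], [420.0, 380.0]])
ozone = np.array([62.0, 70.0, 55.0, 58.0])                        # ug/m3
est, var = ordinary_kriging(xy, ozone, np.array([200.0, 200.0]), model)
print(f"O3 estimate = {est:.1f} ug/m3, kriging std = {np.sqrt(var):.1f}")
```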
35 Value Index, a Novel Decision Making Approach for Waste Load Allocation
Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani
Abstract:
Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria: total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision making approach. Since the value index incorporates both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index, which implies that the definition of different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. In the second, the value-equity curve is derived. The comparative results show that the solutions lie in a similar range of inequity but with lower total costs, due to the freedom in environmental violation attained by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may provide a better classification for scenario definition. It is also concluded that decision makers would do better to focus on the value index and weight its contents to find the most sustainable alternatives based on their requirements.
Keywords: Waste load allocation (WLA), value index, multi-objective particle swarm optimization (MOPSO), Haraz River, equity.
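The river simulation named above relies on the classical Streeter-Phelps oxygen-sag solution, which has a closed form for the dissolved oxygen deficit downstream of a BOD load. The sketch below evaluates it with assumed coefficients, not the Haraz River calibration.

```python
# Classical Streeter-Phelps oxygen-sag solution used to simulate DO along
# a river (standard closed form; all coefficients below are placeholders).
import numpy as np

def do_sag(t, L0, D0, kd, ka, DO_sat=9.0):
    """DO(t) downstream of a BOD discharge; t in days of travel time.
    L0: initial BOD (mg/L), D0: initial deficit (mg/L),
    kd: deoxygenation rate (1/d), ka: reaeration rate (1/d)."""
    D = (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)          # oxygen deficit, mg/L
    return DO_sat - D

t = np.linspace(0.0, 10.0, 101)         # days of travel time
oxygen = do_sag(t, L0=15.0, D0=1.0, kd=0.35, ka=0.6)
t_crit = t[np.argmin(oxygen)]
print(f"minimum DO = {oxygen.min():.2f} mg/L at t = {t_crit:.1f} d")
```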