Search results for: parallel particle swarm optimization
3690 Interval Bilevel Linear Fractional Programming
Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi
Abstract:
The Bilevel Programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision-making variables are divided between the two players, and one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower) where the constraint region of the upper level problem is implicitly determined by the lower level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived; then, using the extended Charnes-Cooper transformation, each fractional problem is reduced to a linear problem. Two algorithms then find the best and the worst optimal values of the leader's objective function.
Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients
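For reference, the reduction named above rests on the classical Charnes-Cooper substitution; its single-level form is sketched below (the extended bilevel version used in the paper adds further structure):

```latex
\[
\max_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
\quad \text{s.t.}\quad Ax \le b,\ \ x \ge 0,\ \ d^{\top}x+\beta > 0 .
\]
Substituting $t = 1/(d^{\top}x+\beta)$ and $y = t\,x$ yields the equivalent linear program
\[
\max_{y,\,t}\ c^{\top}y+\alpha t
\quad \text{s.t.}\quad Ay \le b\,t,\quad d^{\top}y+\beta t = 1,\quad y \ge 0,\ t \ge 0 .
\]
```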
3689 Significant Reduction in Specific CO₂ Emission through Process Optimization at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, M. K. G. Choudhury, Santanu Mallick, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
One of the key corporate goals of Tata Steel is to demonstrate environmental leadership, and decreasing specific CO₂ emission is one of the key steps towards that goal. At any blast furnace, specific CO₂ emission is directly proportional to fuel intake. To reduce the fuel intake at G Blast Furnace, an initial benchmarking exercise was carried out with international and domestic blast furnaces to determine the potential for improvement. The gap identified during the exercise revealed that the benchmark blast furnaces operated with superior raw material quality compared to G Blast Furnace. However, since the raw materials for G Blast Furnace are sourced from captive mines, improving raw material quality was out of scope. Therefore, trials were run with different operating regimes to identify the key process parameters which, on optimization, could significantly reduce the fuel intake at G Blast Furnace. The key process parameters identified from the trials were the stoichiometric oxygen ratio, the melting capacity ratio, and the burden distribution inside the furnace. These parameters were optimized to bridge the gap in fuel intake at G Blast Furnace, thereby reducing specific CO₂ emission to benchmark levels. This shift lowered the fuel intake by 70 kg per ton of liquid iron produced, reducing the specific CO₂ emission by 15 percent.
Keywords: benchmark, blast furnace, CO₂ emission, fuel rate
3688 Electricity Sector's Status in Lebanon and Portfolio Optimization for the Future Electricity Generation Scenarios
Authors: Nour Wehbe
Abstract:
The Lebanese electricity sector is at the heart of a deep crisis. Electricity in Lebanon is supplied by Électricité du Liban (EdL), which has suffered from technical and financial deficiencies for decades and has proved unable to meet a demand that still exceeds supply. As a result, backup generation is widespread throughout Lebanon. The sector drains massive government resources and, on top of that, consumers pay large additional amounts to satisfy their electrical needs. While developed countries have been investing in renewable energy for the past two decades, the Lebanese government now realizes the importance of adopting such energy sourcing strategies to upgrade the country's electricity sector. The diversification of the national electricity generation mix has risen considerably on Lebanon's energy planning agenda, especially since a detailed review of the energy potential in Lebanon has revealed great solar and wind energy resources, a considerable biomass resource, and an important hydraulic potential. This paper presents a review of the energy status of Lebanon and a detailed review of the EdL structure, with the existing problems and recommended solutions. In addition, scenarios reflecting the implementation of policy projects are presented, and conclusions are drawn on the usefulness of a proposed evaluation methodology and the effectiveness of the adopted new energy policy for the electrical sector in Lebanon.
Keywords: EdL (Électricité du Liban), portfolio optimization, electricity generation mix, mean-variance approach
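The mean-variance approach named in the keywords is the classical Markowitz portfolio framework applied to generation technologies. A minimal sketch of how such a model might be set up is shown below; the technology list, cost, and volatility figures are illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative generation technologies with placeholder statistics
# (expected unit cost in $/MWh and cost volatility), not data from the study.
techs = ["thermal", "hydro", "solar", "wind"]
mean_cost = np.array([90.0, 50.0, 60.0, 55.0])  # expected unit cost
stdev = np.array([25.0, 10.0, 12.0, 15.0])      # cost volatility
cov = np.outer(stdev, stdev) * np.eye(4)        # assume uncorrelated costs

risk_aversion = 0.05  # trade-off weight between expected cost and risk

def objective(w):
    # Minimize expected portfolio cost plus a risk (variance) penalty
    return mean_cost @ w + risk_aversion * (w @ cov @ w)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # shares sum to 1
bounds = [(0.0, 1.0)] * 4

res = minimize(objective, x0=np.full(4, 0.25), bounds=bounds,
               constraints=constraints)
for tech, share in zip(techs, res.x):
    print(f"{tech}: {share:.2%}")
```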
3687 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique
Authors: N. Guo, C. Xu, Z. C. Yang
Abstract:
In order to ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of aircraft structural design. Therefore, a structural finite element model that can accurately calculate the structural dynamics and their transfer relations is a prerequisite for structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level, at each coordinate, between the experimental and the analytical mode shapes. The weighted sum of the natural frequency residual and the coordinate modal assurance criterion residual is then used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which combines the advantages of the pattern search (PS) optimization technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment on the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the present dynamic model updating method. The updated results show that the proposed method can successfully correct the inaccurate parameters with good robustness.
Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search
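A hedged sketch of the kind of objective described above, built from the modal assurance criterion (MAC) between analytical and experimental mode shapes; the weights and exact residual forms are assumptions, and the coordinate-wise (COMAC) variant used in the paper evaluates the correlation per coordinate rather than per mode:

```latex
\[
\mathrm{MAC}(\boldsymbol{\varphi}_a,\boldsymbol{\varphi}_e)
= \frac{\bigl|\boldsymbol{\varphi}_a^{\top}\boldsymbol{\varphi}_e\bigr|^{2}}
       {\bigl(\boldsymbol{\varphi}_a^{\top}\boldsymbol{\varphi}_a\bigr)
        \bigl(\boldsymbol{\varphi}_e^{\top}\boldsymbol{\varphi}_e\bigr)},
\qquad
J(\mathbf{p}) = \sum_{i} w_{f,i}\left(\frac{f_{a,i}(\mathbf{p})-f_{e,i}}{f_{e,i}}\right)^{2}
 + \sum_{i} w_{m,i}\bigl(1-\mathrm{MAC}_i(\mathbf{p})\bigr),
\]
where $\mathbf{p}$ collects the uncertain model parameters and $f_{a,i}$, $f_{e,i}$ are the analytical and experimental natural frequencies.
```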
3686 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry
Authors: Nirmal Yadav, Tanuja Srivastava
Abstract:
Shannon sampling theory is an exact method for recovering a band-limited signal from its sampled values in a discrete implementation, using sinc interpolators. However, sinc-based results are not very satisfactory for band-limited calculations, so convolution with a window function having compact support has been introduced. The convolution backprojection algorithm with a window function is therefore an approximation algorithm. In this paper, the error arising from this approximate nature of the reconstruction algorithm is calculated. The result is derived for fan beam projection data, which is faster to acquire than parallel beam projection data.
Keywords: computed tomography, convolution backprojection, radon transform, fan beam
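For orientation, the standard windowed convolution backprojection formula for parallel-beam data is sketched below; the fan-beam version analyzed in the paper adds distance weighting and a change of coordinates, so this is an assumed illustration only:

```latex
\[
f(x,y) \approx \int_{0}^{\pi} \bigl(p_{\theta} * h_{W}\bigr)\!\bigl(x\cos\theta + y\sin\theta\bigr)\, d\theta,
\qquad
h_{W}(s) = \frac{1}{2\pi}\int_{-\Omega}^{\Omega} \lvert\omega\rvert\, W(\omega)\, e^{\,i\omega s}\, d\omega,
\]
where $p_{\theta}$ is the projection at angle $\theta$ and $W$ is a compactly supported window applied to the ramp filter.
```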
3685 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions
Authors: Daneal Rorke, Gueguim Kana
Abstract:
The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material are hampered by the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed and gave a coefficient of determination (R²) of 0.76, producing an optimal RS yield of 2.74 g FS/g substrate. The development of an intelligent model to predict volatile compound fractions gave R² values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol exhibited high sensitivity to acid concentration, alkali concentration and S:L ratio, while phenol also showed high sensitivity to microwave duration and intensity. These findings illustrate the potential of using an intelligent model to predict the fractions of volatile compounds generated during pre-treatment of sorghum leaves, in order to establish a more robust and efficient pre-treatment regime for biofuel production.
Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves
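The "intelligent model" points, via the keywords, to an artificial neural network. A minimal sketch of such a multi-output regression model is shown below; the feature set mirrors the pre-treatment parameters mentioned in the abstract, but the architecture and the randomly generated data are illustrative assumptions, not the authors' design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: acid conc., alkali conc., solid:liquid ratio, microwave duration,
# microwave intensity. Targets: fractions of 21 volatile compounds.
# Random placeholder data below; real data would come from the experiments.
rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 5))   # placeholder pre-treatment settings
Y = rng.uniform(size=(120, 21))  # placeholder volatile-compound fractions

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, Y)
print(model.predict(X[:1]))  # predicted 21-component fraction profile
```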
3684 Mechanism Design and Dynamic Analysis of Active Independent Front Steering System
Authors: Cheng-Chi Yu, Yu-Shiue Wang, Kei-Lin Kuo
Abstract:
An Active Independent Front Steering (AIFS) system is a steering system which adjusts, according to the driving situation, the relation of steering angle between the inner wheel and the outer wheel. In low-speed cornering, AIFS sets the steering angles of the inner and outer wheels to Ackerman steering geometry so that the vehicle has a smaller cornering radius. Besides, AIFS changes the steering geometry to parallel or even anti-Ackerman geometry to keep the vehicle stable in high-speed cornering. Therefore, based on an analysis of vehicle steering behavior under different steering geometries, this study develops a new screw type of active independent front steering system to give vehicles the best cornering performance at any speed. The screw-type AIFS keeps the pinion and separates the rack into a main rack and a second rack, connected by a screw. Extra screw rotation, powered by an assistant motor through a coupler, moves the second rack relative to the main rack, which can adjust both the steering ratio and the steering geometry. First of all, this study characterizes the steering geometry using the Ackerman percentage and uses ADAMS/Car to construct models of diverse steering geometries. The different steering geometries are compared in low-speed and high-speed cornering, and control strategies for the active independent front steering system are then formulated. Secondly, this study applies closed-loop equations to analyze the tire steering angles and carries out optimization calculations to bring the steering geometry of a traditional rack-and-pinion steering system close to Ackerman steering geometry. The steering characteristics of the optimum steering mechanism, and the motion characteristics of a vehicle fitted with it, are verified with ADAMS/Car models of the front suspension and the full vehicle, respectively. By adding a dual auxiliary rack and dual motors to the optimum steering mechanism, the active independent front steering system is developed to achieve the functions of variable steering ratio and variable steering geometry. Finally, this study uses ADAMS/Car and Matlab/Simulink to co-simulate the cornering motion of vehicles, confirming that a vehicle with the AIFS system has better handling performance than one with an Active Front Steering (AFS) system or an Electric Power Steering (EPS) system. In low-speed cornering, vehicles with the AIFS and AFS systems have better maneuverability and a smaller cornering radius than the traditional vehicle with the EPS system, because both AIFS and AFS provide a variable steering ratio; however, there is a slight penalty in motor power consumption. In addition, because of its variable steering geometry, the vehicle with the AIFS system has better high-speed cornering stability and trajectory keeping, and even lower motor power consumption, than vehicles with the EPS system and also with the AFS system.
Keywords: active front steering system, active independent front steering system, steering geometry, steering ratio
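For reference, the Ackerman condition referred to throughout can be stated in its standard textbook form (wheelbase $L$, track width $t$):

```latex
\[
\cot\delta_{o} - \cot\delta_{i} = \frac{t}{L},
\]
where $\delta_i$ and $\delta_o$ are the inner- and outer-wheel steering angles. Parallel steering corresponds to $\delta_i = \delta_o$ (0% Ackerman), and the Ackerman percentage interpolates between these extremes.
```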
3683 Simulation of Performance and Layout Optimization of Solar Collectors with AVR Microcontroller to Achieve Desired Conditions
Authors: Mohsen Azarmjoo, Navid Sharifi, Zahra Alikhani Koopaei
Abstract:
This article aims to conserve energy and optimize the performance of solar water heaters using modern modeling systems. In this study, a large-scale solar water heater is modeled around an AVR microcontroller serving as the digital processor. This mechatronic system is used to analyze the performance and design of solar collectors, with the ultimate goal of improving the efficiency of the system in use. The findings of this research provide insights into optimizing the performance of solar water heaters: by manipulating the arrangement of the solar panels and controlling the water flow through them with the AVR microcontroller, researchers can identify the optimal configurations and operational protocols for achieving the desired temperature and flow conditions. The article examines the impact of solar panel layout on system efficiency and explores methods of controlling water flow to reach the desired temperature and flow conditions. These findings can contribute to the development of more efficient and sustainable heating and cooling systems that rely on renewable energy sources.
Keywords: energy conservation, solar water heaters, solar cooling, simulation, mechatronics
3682 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a Reduced-Apriori (R-Apriori) algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel: it implements, tests and benchmarks Apriori, Reduced-Apriori and our new algorithm ReducedAll-Apriori on Apache Flink and compares them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows a next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
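To illustrate the iterative join-and-prune structure that makes Apriori amenable to MapReduce-style parallelization (and disk-I/O-heavy on Hadoop), here is a minimal single-machine sketch of standard Apriori; it is not the RA-Apriori algorithm itself.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: returns frequent itemsets with their support counts."""
    transactions = [frozenset(t) for t in transactions]
    # L1: frequent single items
    counts = {}
    for t in transactions:
        for item in t:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result, k = dict(frequent), 2
    while frequent:
        # Join step: candidate k-itemsets from items of frequent (k-1)-itemsets
        items = sorted({i for s in frequent for i in s})
        candidates = {frozenset(c) for c in combinations(items, k)
                      # Prune step: every (k-1)-subset must itself be frequent
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))}
        # Support counting: the map/reduce-friendly scan over all transactions
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result

print(apriori([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}], 2))
```

The support-counting scan is the step distributed across workers in the MapReduce setting, and it is this repeated full-data pass that Flink's pipelining accelerates relative to Hadoop.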
3681 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as the double-loop model and its variations. Metacognition has also been suggested as a concept to describe team learning as more than a simple sum of the individual learning of the team members. Team learning correlates positively both with the individual motivation of the members and with collective factors within the team. Here, the team learning of previously very independent members of two teaching teams is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are increasingly large for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and experiences; this occurs in student teams. Secondly, teaching multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted, and team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two parallel course units in engineering education: the first in Industrial Automation Technology and the second in Development of Medical Devices. The courses have separate student groups and are held on different campuses, but both run in parallel within an 8-week period. Each is taught by a group of four teachers with several years of teaching experience, gained individually. The team role stress scale survey is administered to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload and qualitative role overload, allowing some comparison with studies on project teams. The development stages of the two teaching groups differ. Relating the team role stress factors to the development stage of a group can reveal the potential of management actions to promote team building and can help gauge the maturity of functional, well-established teams. Mature teams report higher job satisfaction and deliver higher performance. Teaching teams, whose learning outcomes are highly intangible, are especially sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.
Keywords: engineering education, stress, team role, team teaching
3680 The Importance of Downstream Supply Chain in Supply Chain Risk Management: Multi-Objective Optimization
Authors: Zohreh Khojasteh-Ghamari, Takashi Irohara
Abstract:
One of the efficient ways to manage supply chain risk is to avoid interruptions in the Supply Chain (SC) before they occur. Although the majority of organizations focus on their first-tier suppliers to avoid SC risk, studies show that first-tier suppliers are the cause in only 60 percent of disruption cases; in the other 40 percent, the cause lies in the downstream SC, i.e., the second tier and lower. Due to the increasing complexity and interrelation of modern supply chains, SC elements have become difficult to trace. Moreover, studies show a vital need to better understand the integration of risk and visibility, especially in the context of multiple objectives. In this study, we propose a multi-objective programming model to avoid disruption in the SC. The objective is to evaluate the effect of downstream supply chain visibility (SCV) on managing supply chain risk. We propose a multi-objective mathematical programming model whose objective functions minimize the total cost and maximize downstream SCV, with supplier selection as the decision variable. We assume there are several manufacturers and several candidate suppliers; for each manufacturer, the model proposes the best suppliers with the lowest cost and maximum visibility in the downstream supply chain. We examine the applicability of the model through numerical examples, and we also define several data scenarios and observe the resulting trends. The results show that a minimum level of visibility in the downstream SC is needed to have a safe SC network.
Keywords: downstream supply chain, optimization, supply chain risk, supply chain visibility
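A hedged sketch of a bi-objective supplier-selection formulation of the kind described, with binary variable $x_{ij}=1$ if supplier $j$ serves manufacturer $i$; the exact constraints and visibility measure in the paper may differ:

```latex
\[
\min \sum_{i}\sum_{j} c_{ij}\,x_{ij},
\qquad
\max \sum_{i}\sum_{j} v_{ij}\,x_{ij},
\qquad
\text{s.t. } \sum_{j} x_{ij} = 1 \ \ \forall i,\quad x_{ij}\in\{0,1\},
\]
where $c_{ij}$ is the cost and $v_{ij}$ the downstream visibility score of assigning supplier $j$ to manufacturer $i$; a weighted-sum scalarization $\min \sum_{i,j}\bigl(\lambda\,c_{ij}-(1-\lambda)\,v_{ij}\bigr)x_{ij}$ traces the trade-off between the two objectives as $\lambda$ varies.
```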
3679 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion
Authors: Esam Jassim
Abstract:
Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly affect atmospheric pollution levels. Fine and ultrafine particulates tend to escape flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, increasing the inclination to corrosion, decreasing heat transfer in the thermal unit, and severely affecting human health. This adverse impact manifests particularly in regions of the world where coal is the dominant source of energy for consumption. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion, together with the influence of the existing form of calcium on the coarse, fine and ultrafine mode formation mechanisms. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three blends, named Blends 1, 2 and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed, and the experimental calcium distribution in ash is modeled using the Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium present in the original coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiencies, since it is the major player in the mechanism of the ash slagging process. The mechanisms of particle size distribution and the mineral species of ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics
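For reference, the Poisson model used to describe the element distribution, with the elemental index $\lambda$ as its parameter:

```latex
\[
P(X = k) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \dots
\]
so that $\lambda$ is both the mean and the variance of the modeled constituent count; the reported indices ($\lambda = 17$ for mineral-grain calcium, $\lambda = 7$ for organically associated calcium) thus summarize how each form of calcium distributes across ash particles.
```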
3678 Tool Wear of Metal Matrix Composite 10wt% AlN Reinforcement Using TiB2 Cutting Tool
Authors: M. S. Said, J. A. Ghani, C. H. Che Hassan, N. N. Wan, M. A. Selamat, R. Othman
Abstract:
Metal Matrix Composites (MMCs) have attracted considerable attention as a result of their ability to provide higher strength, modulus, toughness, impact properties, wear resistance and corrosion resistance than unreinforced alloys. Aluminium-silicon (Al/Si) alloy MMCs have been widely used in various industrial sectors such as transportation, domestic equipment, aerospace, military and construction. The aluminium-silicon alloy MMC reinforced with aluminium nitride (AlN) particles has become a new-generation material for automotive and aerospace applications: AlN is an advanced material whose light weight, high strength, high hardness and stiffness give it good future prospects. However, the high degree of ceramic particle reinforcement and the irregular nature of the particles within the matrix are the main problems leading to machining difficulties. This paper examines tool wear when milling an AlSi/AlN metal matrix composite using a TiB2-coated carbide cutting tool. The volume fraction of the AlN reinforcement particles was 10%, and the milling was carried out under dry cutting conditions. The cutting parameters for the TiB2-coated carbide insert were: cutting speed 230 m/min with feed rate 0.4 mm/tooth and depth of cut (DOC) 0.5 mm; 300 m/min with 0.8 mm/tooth and DOC 0.5 mm; and 370 m/min with 0.8 mm/tooth and DOC 0.4 mm. A Sometech SV-35 video microscope system was used for the tool wear measurements. The results revealed that tool life increases with cutting speed: the cutting speed of 370 m/min with feed rate 0.8 mm/tooth and depth of cut 0.4 mm constituted the optimum condition, with the longest tool life of 123.2 min. At the medium cutting speed of 300 m/min (feed rate 0.8 mm/tooth, depth of cut 0.5 mm) the tool life was 119.86 min, while the low cutting speed gave 119.66 min. The high cutting speed thus gives the best parameters for cutting AlSi/AlN MMC material, and the results will help manufacturers machine it.
Keywords: AlSi/AlN metal matrix composite milling process, tool wear, TiB2-coated carbide tool, manufacturing engineering
3677 Patient Scheduling Improvement in a Cancer Treatment Clinic Using Optimization Techniques
Authors: Maryam Haghi, Ivan Contreras, Nadia Bhuiyan
Abstract:
Chemotherapy is one of the most popular and effective cancer treatments offered to patients in outpatient oncology centers. In such clinics, a patient first consults with an oncologist, who may prescribe a chemotherapy treatment plan based on blood test results and an examination of the patient's health status. Once the plan is determined, a set of chemotherapy and consultation appointments must be scheduled for the patient. In this work, a comprehensive mathematical formulation is proposed for planning and scheduling different types of chemotherapy patients over a planning horizon, considering the blood test, consultation, pharmacy and treatment stages. To be realistic and applicable, the study focuses on a case study of a major outpatient cancer treatment clinic in Montreal, Canada. Comparing the results of the proposed model with the clinic's current practice shows significant improvements in several performance measures. These major improvements in the patients' schedules reveal that using optimization techniques for planning and scheduling patients in such highly demanded cancer treatment clinics is an essential step toward good coordination between the different stages involved, which ultimately increases the efficiency of the entire system and improves staff and patient satisfaction.
Keywords: chemotherapy patients scheduling, integer programming, integrated scheduling, staff balancing
3676 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Recently, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights into this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and reliable surface model of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed to co-register the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters by least squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation parameters provided an accuracy of two to three meters, with the best results reported for the DLT transformation model, although increasing the number of GCPs again had no substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards; however, well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not need substantial computations.
Keywords: Mars, photogrammetry, MOLA, HiRISE
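For reference, the standard 11-parameter DLT model that maps object-space coordinates $(X,Y,Z)$ to image coordinates $(x,y)$; this is the textbook form of the model named in the abstract:

```latex
\[
x = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1},
\qquad
y = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1},
\]
where $L_1,\dots,L_{11}$ are estimated from the GCPs by least-squares adjustment; each GCP contributes two equations, so at least six well-distributed points yield a redundant solution, consistent with the GCP counts tested above.
```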
3675 A Model for Analysis the Induced Voltage of 115 kV On-Line Acting on Neighboring 22 kV Off-Line
Authors: Sakhon Woothipatanapan, Surasit Prakobkit
Abstract:
This paper presents a model for analyzing the induced voltage of energized transmission lines acting on neighboring de-energized distribution lines. Owing to environmental restrictions, 22 kV distribution lines need to be installed under 115 kV transmission lines. With the two circuits installed in parallel like this, an induced voltage arises which can harm operators. This work uses ATP-EMTP modeling to analyze the phenomenon before field testing, and the simulation results are used to find solutions that prevent danger to operators working on the pole.
Keywords: transmission system, distribution system, induced voltage, off-line operation
3674 Optimal Geothermal Borehole Design Guided By Dynamic Modeling
Authors: Hongshan Guo
Abstract:
Ground-source heat pumps (GSHPs) provide stable and reliable heating and cooling when designed properly. The confounding effect of borehole depth on a GSHP system, however, is rarely taken into account in optimization: the borehole depth is usually determined before the corresponding system components are selected and before any optimization of the GSHP system. The depth of the borehole matters because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature. This can cause the coefficient of performance (COP) of the GSHP system to fluctuate in the long term when the heating/cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller is developed that reads different building load profiles, optimizes for the smallest cost and the smallest temperature fluctuation at the borehole wall, and provides the borehole depth as its output. Due to the nonlinear dynamic nature of the GSHP system, the model predictive control (MPC) formulation was found more feasible than the conventional optimal control formulation, because both the trajectory history during the iterations and the final output could be computed and compared against. Aside from a few scenarios with different weighting factors, the resulting system costs were verified against the literature and reports and found to be relatively accurate, while the temperature fluctuation at the borehole wall was found to be within an acceptable range. It was therefore determined that MPC is adequate to optimize both the investment and the system performance for various outputs.
Keywords: geothermal borehole, MPC, dynamic modeling, simulation
3673 Multi Objective Simultaneous Assembly Line Balancing and Buffer Sizing
Authors: Saif Ullah, Guan Zailin, Xu Xianhao, He Zongdong, Wang Baoxi
Abstract:
The assembly line balancing problem aims to divide the tasks among the stations of an assembly line and to optimize certain objectives. In assembly lines, the workloads of stations differ from each other due to different task times, and the difference in workloads between stations can cause blockage or starvation at some stations. Buffers are used to store semi-finished parts between stations and can help smooth assembly production. Both line balancing and buffer sizing affect the throughput of assembly lines, yet they have been studied separately in the literature; because of their joint contribution to the throughput rate, it is desirable to study them simultaneously, and they are therefore considered concurrently in the current research. This research aims to simultaneously maximize throughput, minimize the total size of buffers in the assembly line, and minimize workload variations in the assembly line. A multi-objective optimization formulation is designed which can give better Pareto solutions from the Pareto front, and a simple example problem is solved for simultaneous assembly line balancing and buffer sizing. The current research is significant for assembly line balancing research, and introducing optimization approaches that can further improve this multi-objective problem will be significant future work.
Keywords: assembly line balancing, buffer sizing, Pareto solutions
3672 Feasibility of Washing/Extraction Treatment for the Remediation of Deep-Sea Mining Tailings
Authors: Kyoungrean Kim
Abstract:
The importance of deep-sea mineral resources is increasing dramatically due to the depletion of land mineral resources under growing economic activity. Korea has acquired exclusive exploration licenses in four areas: the Clarion-Clipperton Fracture Zone in the Pacific Ocean (2002), Tonga (2008), Fiji (2011) and the Indian Ocean (2014). Nautilus Minerals (Canada) and Lockheed Martin (USA) are expected to be prepared for commercial mining by 2020. The London Protocol 1996 (LP) under the International Maritime Organization (IMO) and the International Seabed Authority (ISA) will set environmental guidelines for deep-sea mining by 2020 to protect the marine environment. In this research, the applicability of washing/extraction treatment for the remediation of deep-sea mining tailings was evaluated in order to provide preliminary data for developing a practical remediation technology in the near future. Polymetallic nodule samples were collected in the Clarion-Clipperton Fracture Zone in the Pacific Ocean and stored at room temperature. Samples were pulverized using a jaw crusher and a ball mill, then classified into three particle sizes (> 63 µm, 63-20 µm, < 20 µm) using vibratory sieve shakers (Analysette 3 Pro, Fritsch, Germany) with 63 µm and 20 µm sieves. Only the 63-20 µm fraction was used for the investigation, considering the lower limit of the ore dressing process, which is tens of µm to 100 µm. Rhamnolipid and sodium alginate, as biosurfactants, and aluminum sulfate, which is mainly used as a flocculant, were chosen as environmentally friendly additives. Samples were adjusted to 2% in deionized water and mixed with various concentrations of additives. The mixtures were stirred with a magnetic bar for specific reaction times, and the liquid phase was then separated in a centrifugal separator (Thermo Fisher Scientific, USA) at 4,000 rpm for 1 h. The separated liquid was filtered with a syringe and an acrylic-based filter (0.45 µm). The extracted heavy metals in the filtered liquid were then determined using a UV-Vis spectrometer (DR-5000, Hach, USA) and a heat block (DBR 200, Hach, USA) following US EPA methods (8506, 8009, 10217 and 10220). The polymetallic nodules were mainly composed of manganese (27%), iron (8%), nickel (1.4%), copper (1.3%), cobalt (1.3%) and molybdenum (0.04%). Based on the remediation standards of various countries, nickel (Ni), copper (Cu), cadmium (Cd) and zinc (Zn) were selected as primary target materials. Throughout this research, the use of rhamnolipid was shown to be an effective approach for removing heavy metals from samples originating from manganese nodules; sodium alginate might also be an effective additive for the remediation of deep-sea mining tailings such as polymetallic nodules. Compared to rhamnolipid and sodium alginate, aluminum sulfate was the more effective additive at short reaction times (within 4 h). Based on these results, sequenced particle separation, selective extraction/washing, advanced filtration of the liquid phase, water treatment without dewatering, and solidification/stabilization may be considered candidate technologies for the remediation of deep-sea mining tailings.
Keywords: deep-sea mining tailings, heavy metals, remediation, extraction, additives
3671 Approximate Spring Balancing for Swimming Pool Lift Mechanism to Reduce Actuator Torque
Authors: Apurva Patil, Sujatha Srinivasan
Abstract:
Reducing actuator loads is important for applications in which human effort is required for actuation. The potential benefit of applying spring balancing to rehabilitation devices that work against gravity on a non-horizontal plane is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balancing, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method for static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs and no auxiliary links. Its application to a manually operated swimming pool lift mechanism, which lowers and raises physically challenged users into or out of the swimming pool, is presented here, and various possible configurations using extension springs, compression springs and gas springs in the mechanism are compared. This work involves approximate spring balancing of the mechanism by minimizing the variance of the potential energy: it flattens the potential energy distribution over the workspace and fuses this approach with numerical optimization. The results show a considerable reduction in the required actuator torque with a practical spring design and arrangement. Although the method provides only approximate balancing, it is versatile, flexible in choosing control variables relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring and the optimal attachment points subject to the optimization constraints. It also uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs with non-zero-free-length ones. The method allows springs to be attached inside the mechanism, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resulting swimming pool lift mechanism is compact. The cost benefits and reduced complexity can be significant advantages in the development of this user-actuated swimming pool lift for developing countries.
Keywords: gas spring, rehabilitation device, spring balancing, swimming pool lift
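A minimal sketch of the variance-flattening idea described above: choose the spring constant and free length so that the total potential energy stays nearly constant over the workspace. The single-pendulum geometry and all numbers are illustrative assumptions, not the actual lift mechanism.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative gravity balancing of a single pivoted link: mass m at radius r
# from the pivot; a spring of stiffness k and free length L0 runs from a
# ground point a distance 'a' above the pivot to a point 'b' along the link.
# theta is the link angle from vertical. All parameters are assumptions.
m, g, r, a, b = 10.0, 9.81, 0.5, 0.4, 0.3
theta = np.linspace(0.05, np.pi / 2, 60)  # sampled workspace

def total_potential(params):
    k, L0 = params
    spring_len = np.sqrt(a**2 + b**2 - 2.0 * a * b * np.cos(theta))
    V_gravity = m * g * r * np.cos(theta)
    V_spring = 0.5 * k * (spring_len - L0) ** 2
    return V_gravity + V_spring

def energy_variance(params):
    # Flatten the potential energy distribution over the workspace
    return np.var(total_potential(params))

# A practical non-zero free length is enforced via the lower bound on L0
# (exact balance would need an idealized zero-free-length spring).
res = minimize(energy_variance, x0=[400.0, 0.1],
               bounds=[(0.0, 5000.0), (0.05, 0.5)])
k_opt, L0_opt = res.x
print(f"k = {k_opt:.1f} N/m, L0 = {L0_opt:.3f} m, "
      f"energy std = {np.std(total_potential(res.x)):.3f} J")
```

The residual standard deviation of the total potential energy is a direct proxy for the actuator torque still required after balancing.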
3670 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to decarbonising their electricity supply, along with promoting large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of multi-year TNEP is to determine the network infrastructure needed to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty of power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must also be considered within the TNEP, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for realistic-sized power system models, and there is an increasing need for efficient algorithms capable of solving the TNEP problem within reasonable computational time and resources. A promising research direction is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems such as the TNEP; in particular, combining AI with decomposition-based mathematical optimization has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem that combines AI techniques with Column Generation, a traditional decomposition-based optimization method. One challenge of using Column Generation for the TNEP problem is that the subproblems are of mixed-integer nature, so solving them requires significant time and resources. Hence, in this proposal, we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that the binary classifier is integrated into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% in estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm reduced the computational time required to solve the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classification technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as on other power system models.
Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
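A minimal sketch of the classifier idea described above: train on features extracted from solved LP relaxations of the subproblems and predict the 0/1 value of each binary variable. The feature construction, model choice, and thresholds are illustrative assumptions, not the authors' exact design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Placeholder training data: each row holds features of one binary variable
# taken from a solved LP relaxation (e.g., its fractional value, reduced cost,
# a loading estimate); the label is its value in a known integer-optimal
# solution. Real features would come from solved TNEP subproblems.
X_train = rng.uniform(size=(1000, 3))
y_train = (X_train[:, 0] + 0.1 * rng.standard_normal(1000) > 0.5).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# At solve time: fix only high-confidence predictions and leave the rest to
# the exact solver, so optimality can still be guaranteed as in the paper.
X_new = rng.uniform(size=(10, 3))
proba = clf.predict_proba(X_new)[:, 1]
fixed = {i: int(p > 0.5) for i, p in enumerate(proba)
         if p < 0.03 or p > 0.97}
print(f"fixed {len(fixed)} of {len(proba)} binaries:", fixed)
```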
3669 Developing a Comprehensive Green Building Rating System Tailored for Nigeria: Analyzing International Sustainable Rating Systems to Create Environmentally Responsible Standards for the Nigerian Construction Industry and Built Environment
Authors: Azeez Balogun
Abstract:
Green building rating practices are continually evolving and vary across regions. Yet a few core ideas remain constant, such as site selection, design, energy efficiency, water and material conservation, indoor environmental quality, operational optimization, and waste reduction. The essence of green building lies in the optimization of one or more of these criteria. This paper conducts a comparative analysis of seven widely recognized sustainable rating systems (BREEAM, CASBEE, Green Globes, Green Star, HK-BEAM, IGBC Green Homes, and LEED), based on the perceptions and opinions of stakeholders in Nigeria certified in green building rating systems. The purpose is to identify and adopt an appropriate green building rating system for Nigeria. Numerous aspects of these systems were examined to determine the best fit for the Nigerian built environment. The findings indicate that LEED, the principal system in the USA and Canada, is the most suitable for Nigeria due to its strong foundation, extensive funding, and proven benefits. LEED obtained the highest rating of 80 out of 100 points in this assessment.
Keywords: architecture, built environment, green building rating system, Nigeria Green Building Council, sustainability
3668 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and time to market for the business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing and data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments; indeed, they introduce runtime dependencies for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required for ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input to i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is passed to other microservices via an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer brings more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35MB); the system is also more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
3667 Adsorption: A Decision Maker in the Photocatalytic Degradation of Phenol on Co-Catalysts Doped TiO₂
Authors: Dileep Maarisetty, Janaki Komandur, Saroj S. Baral
Abstract:
In the current work, the photocatalytic degradation of phenol was carried out under both UV and visible light to find the slowest step limiting the rate of the photo-degradation process. Characterization experiments such as XRD, SEM, FT-IR, TEM, XPS, UV-DRS, PL, BET, UPS, ESR and zeta potential measurements were conducted to assess the ability of the catalysts to boost the photocatalytic activity. To explore the synergy, TiO₂ was doped with graphene and alumina. The orbital hybridization introduced by alumina doping (mediated by graphene) resulted in higher electron transfer from the conduction band of TiO₂ to the alumina surface, where oxygen reduction reactions (ORR) occur. Besides, doping with alumina and graphene introduced defects into the Ti lattice and helped improve the adsorptive properties of the modified photo-catalyst. Results showed that these defects promoted the oxygen reduction reactions on the catalyst's surface. ORR activity produces reactive oxygen species (ROS), which oxidize the phenol molecules adsorbed on the surface of the photo-catalysts, thereby driving the photocatalytic reactions. Since mass transfer is considered the rate-limiting step, various mathematical models were applied to the experimental data to probe the best fit. By varying the parameters, it was found that intra-particle diffusion was the slowest step in the degradation process, and the Lagergren model gave the best R² values, indicating the nature of the rate kinetics. Similarly, different adsorption isotherms were employed, revealing that the Langmuir isotherm fits best, with a tremendous increase in the uptake capacity (mg/g) of TiO₂-rGO-Al₂O₃ compared to undoped TiO₂; this further assisted the adsorption of phenol molecules. From the experimental results, kinetic modelling and adsorption isotherms, it is concluded that, apart from the changes in surface, optoelectronic and morphological properties that enhanced the photocatalytic activity, intra-particle diffusion within the catalyst's pores serves as the rate-limiting step deciding the fate of the photo-catalytic degradation of phenol.
Keywords: ORR, phenol degradation, photo-catalyst, rate kinetics
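For reference, the standard forms of the two models named above, the Lagergren pseudo-first-order kinetic model and the Langmuir isotherm:

```latex
\[
\ln\!\left(q_e - q_t\right) = \ln q_e - k_1 t,
\qquad
q_e = \frac{q_m K_L C_e}{1 + K_L C_e},
\]
where $q_t$ and $q_e$ (mg/g) are the amounts adsorbed at time $t$ and at equilibrium, $k_1$ the pseudo-first-order rate constant, $C_e$ the equilibrium phenol concentration, $q_m$ the monolayer uptake capacity, and $K_L$ the Langmuir constant.
```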
3666 Application of Bacteriophage and Essential Oil to Enhance Photocatalytic Efficiency
Authors: Myriam Ben Said, Dhekra Trabelsi, Faouzi Achouri, Marwa Ben Saad, Latifa Bousselmi, Ahmed Ghrabi
Abstract:
The present study suggests the use of cheap, safe-to-handle, natural, environmentally benign biological bactericides to enhance the conventional wastewater treatment process. In this sense, to enhance the photocatalytic treatability of wastewater, we used virulent bacteriophage(s) and essential oils (EOs). Pre-treatment of wastewater with lytic phage(s) leads to a decrease in bacterial density and, consequently, limits the establishment of intercellular communication (quorum sensing, QS), thus preventing biofilm formation and inhibiting the expression of other virulence factors after photocatalysis. Moreover, to increase the photocatalytic efficiency, we added thyme EO (T. vulgaris) at 1/1000 (w/v) to the secondary treated wastewater. This EO showed in vitro anti-biofilm activity through the inhibition of planktonic cell motility and of cell attachment to an inert surface, as well as deterioration of the sessile structure. The presence of photoactivatable molecules (photosensitizers) in this type of oil allows the optimization of photocatalytic efficiency without the hazards related to dyes and chemical reagents. The use of 'biological and natural tools' in combination with the usual water treatment processes can be considered a safe procedure to reduce and/or prevent the recontamination of treated water and the re-expression of virulence factors, such as biofilm formation, by pathogenic bacteria, using environmentally friendly processes.
Keywords: biofilm, essential oil, optimization, phage, photocatalysis, wastewater
3665 On the Approximate Solution of Continuous Coefficients for Solving Third Order Ordinary Differential Equations
Authors: A. M. Sagir
Abstract:
This paper derives four new schemes which are combined to form an accurate and efficient block method for the parallel or sequential solution of third-order ordinary differential equations of the form $y''' = f(x, y, y', y'')$, with $y(\alpha) = y_0$, $y'(\alpha) = \beta$, $y''(\alpha) = \mu$, together with associated initial or boundary conditions. The implementation strategies of the derived method show that the block method is consistent and zero-stable, and hence convergent. The derived schemes were tested on stiff and non-stiff ordinary differential equations, and the numerical results obtained compared favorably with the exact solutions.
Keywords: block method, hybrid, linear multistep, self-starting, third order ordinary differential equations
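As an illustration of the problem class (not of the block method itself), a third-order IVP can be reduced to a first-order system and solved numerically; the example equation below is an assumption chosen because its exact solution is known.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative IVP: y''' = -y', with y(0)=0, y'(0)=1, y''(0)=0,
# whose exact solution is y(x) = sin(x). This is a stand-in example,
# not a problem from the paper, and solve_ivp is not the block method.
def f(x, u):
    y, yp, ypp = u          # u = (y, y', y'')
    return [yp, ypp, -yp]   # (y', y'', y''' = f(x, y, y', y''))

sol = solve_ivp(f, (0.0, 2.0 * np.pi), [0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in np.linspace(0.0, 2.0 * np.pi, 5):
    print(f"x={x:5.2f}  y={sol.sol(x)[0]: .8f}  exact={np.sin(x): .8f}")
```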
3664 Development of a PJWF Cleaning Method for Wet Electrostatic Precipitators
Authors: Hsueh-Hsing Lu, Thi-Cuc Le, Tung-Sheng Tsai, Chuen-Jinn Tsai
Abstract:
This study designed and tested a novel wet electrostatic precipitator (WEP) system featuring a Pulse-Air-Jet-Assisted Water Flow (PJWF) to shorten the water cleaning time, reduce water usage and maintain high particle removal efficiency. The PJWF injects cleaning water tangentially at the cylinder wall, rapidly enhancing the momentum of the water flow for efficient dust cake removal; each PJWF cycle uses approximately 4.8 liters of cleaning water in 18 seconds. Comprehensive laboratory tests were conducted using a single-tube WEP prototype over a flow rate range of 3.0 to 6.0 cubic meters per minute (CMM), operating voltages between -35 and -55 kV, and a high-frequency power supply. The prototype, consisting of 72 sets of double-spike rigid discharge electrodes, demonstrated that with the PJWF, at -35 kV and 3.0 CMM, the PM2.5 collection efficiency remained as high as its initial value of 88.02 ± 0.92% after loading with Al2O3 particles at 35.75 ± 2.54 mg/Nm3 for 20 hours of continuous operation. In contrast, without the PJWF, the PM2.5 collection efficiency dropped drastically from 87.4% to 53.5%. Theoretical modeling closely matched the experimental results, confirming the robustness of the system's design and its scalability to larger industrial applications. Future research will focus on optimizing the PJWF system, exploring its performance with various particulate matter, and ensuring long-term operational stability and reliability under diverse environmental conditions. Recently, this WEP was combined with a preceding cooling tower (CT) and a honeycomb wet scrubber (HWS) and pilot-tested (40 CMM) for removing SO2 and PM2.5 emissions at a sintering plant of an integrated steelmaking plant. Pilot-test results showed removal efficiencies for SO2 and PM2.5 emissions as high as 99.7 and 99.3%, respectively, with ultralow emitted concentrations of 0.3 ppm and 0.07 mg/m3, while white smoke was eliminated at the same time. These new technologies are being used in industry, and their application in other fields is expected to expand, substantially reducing air pollutant emissions for better ambient air quality.
Keywords: wet electrostatic precipitator, pulse-air-jet-assisted water flow, particle removal efficiency, air pollution control
Procedia PDF Downloads 193663 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
The cropping-system is a method used by farmers. It is environmentally friendly, protecting natural resources (soil, water, air, nutritive substances) while increasing production, by taking crop particularities into account. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. Algorithms of this type have proved efficient at solving optimization problems, and their polynomial complexity allows them to be applied to harder and more varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources, minimizing costs and maximizing profit. To achieve these goals, a genetic algorithm was designed. The algorithm finds several optimized cropping-system candidates that have the highest profit and thus minimize costs. It uses genetic-based operators (mutation, crossover) and structures (genes, chromosomes): a candidate cropping-system is a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. Implementing this method would benefit farmers by giving them hints and helping them use their resources efficiently.Keywords: chromosomes, cropping, genetic algorithm, genes
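The abstract specifies the encoding (rotation = chromosome, crop = gene) but not the fitness function or operator details; the sketch below is a minimal, assumed Python implementation of that encoding, with illustrative crops, profits, and a simple penalty for repeating a crop in consecutive years.

```python
import random

CROPS = ["wheat", "maize", "soy", "barley"]               # assumed crop set
PROFIT = {"wheat": 5, "maize": 7, "soy": 6, "barley": 4}  # assumed profit per crop

def fitness(rotation):
    """Total profit, penalized when the same crop appears in consecutive
    years (a crude stand-in for agronomic rotation constraints)."""
    score = sum(PROFIT[c] for c in rotation)
    score -= sum(3 for a, b in zip(rotation, rotation[1:]) if a == b)
    return score

def crossover(a, b):
    # One-point crossover: splice the head of one parent onto the tail of another.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    # Each gene (crop) is independently replaced with a random crop.
    return [random.choice(CROPS) if random.random() < rate else c for c in rotation]

def evolve(years=6, pop_size=50, generations=100):
    pop = [[random.choice(CROPS) for _ in range(years)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

In a realistic version the fitness would incorporate the paper's cost and resource terms rather than this toy repetition penalty.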
Procedia PDF Downloads 4263662 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems
Authors: Alexander Norbach
Abstract:
This paper describes an advanced simulation environment built on electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation can be applied to any dynamic system exhibiting diffusion and ionisation behaviour. With an additional observer structure, the system performs parallel real-time simulation based on a diffusion model and on a state-space representation for the remaining dynamics. The proposed model covers electrodynamic effects, including ionising effects and eddy current distributions. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution can also be computed for further use. With this system, uncertainties, unknown initial states, and disturbances can be estimated. This yields more precise estimates of the system states and, additionally, of the ionising disturbances that occur due to radiation effects. The results show that such a system can also be developed and adapted specifically for space systems, with real-time calculation restricted to radiation effects. Electronic systems can be damaged by charged-particle flux in space or other radiation environments. To react to these processes, the presence of ionising radiation and the absorbed dose must be calculated within a short time. All available sensors are used to observe the spatial distributions; from the measured values and the known sensor locations, the entire distribution can be reconstructed retroactively or refined. From the reconstructed distribution, the type of ionisation and its direct effect on the system can be determined, so that preventive measures, up to and including shutdown, can be triggered. The results show that faster, higher-quality simulations are possible independent of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of diffusion effects and their mechanisms. For the modelling and derivation of equations, the extended current equation is used; the quantity K represents the proposed charge-density drift vector. The extended diffusion equation was derived; it exhibits a quantising character and obeys a law similar to the Klein-Gordon equation. PDEs (Partial Differential Equations) of this kind are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution; g₁ is the direct function and g₂ is a recursive function. The analytical results are accurate enough for calculating fields with diffusion effects. Within the scope of this work, a model accounting for the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed diffusion calculation is its real-time capability, which is not feasible with the FEM programs available today. In the further course of research, it makes sense to apply these methods and investigate them thoroughly.Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver
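The response coefficients g₁ and g₂ are not given in the abstract; the sketch below is an assumed discrete-time illustration of the stated scheme, combining a direct convolution with g₁ and a recursive (feedback) part driven by past outputs through g₂.

```python
import numpy as np

def field_response(u, g1, g2):
    """y[k] = sum_m g1[m]*u[k-m] + sum_n g2[n]*y[k-1-n]:
    a direct convolution of the input with g1 plus a recursive part
    fed back from earlier outputs through g2. g1 and g2 are assumed
    sampled responses at multiples of the sampling period Ts."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        direct = sum(g1[m] * u[k - m] for m in range(min(k + 1, len(g1))))
        recursive = sum(g2[n] * y[k - 1 - n] for n in range(min(k, len(g2))))
        y[k] = direct + recursive
    return y

# Illustrative coefficients (assumed, not from the paper)
g1 = np.array([0.5, 0.3, 0.1])   # direct part of the response
g2 = np.array([0.2, -0.05])      # recursive (feedback) part
u = np.ones(100)                 # step-like excitation current
y = field_response(u, g1, g2)
print(f"steady-state output: {y[-1]:.4f}")
```

Because each output sample needs only a short dot product over stored coefficients, a structure like this maps naturally onto an FPGA or microcontroller for real-time evaluation, which is the capability the paper emphasizes over offline FEM solvers.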
Procedia PDF Downloads 1293661 Agent-Based Modeling of Pedestrian Corridor Congestion on the Characteristics of Physical Space Form
Abstract:
The pedestrian corridor is the most crowded area in public space, and its crowded severity has mainly been studied in the context of evacuation strategies at the entrances of large public spaces. The aim of this paper is to analyze walking efficiency in different pedestrian corridor spaces as the spatial parameters vary, and to model the congestion caused by those variations in walking efficiency. The study establishes a spatial model of the walking corridor by setting the corridor's width, slope, turning form, and turning angle. Pedestrians' preferred walking mode varies with the crowded severity, walking speed, field of vision, sight direction, and expected destination, all of which are influenced by the characteristics of the physical space form. The Swarm software is applied to build the agent model. From the model's output, the relationships between corridor width, ground slope, turning form, and turning angle on the one hand, and walking efficiency and crowded severity on the other, are obtained. The simulation results can be applied to pedestrian corridor design to reduce crowded severity and the potential safety risks posed by crowds.Keywords: crowded severity, multi-agent, pedestrian preference, urban space design
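The paper's agent rules are implemented in Swarm and are not detailed in the abstract; the following Python sketch is an assumed, minimal analogue in which each agent's speed falls off with local density, the simplest mechanism by which crowded severity can emerge from corridor width. All parameter values (corridor length, width, free walking speed, density law) are illustrative.

```python
import random

class Pedestrian:
    def __init__(self, x):
        self.x = x  # position along the corridor axis (m)

def step(agents, corridor_length=50.0, width=3.0, free_speed=1.3, dt=0.5):
    """Advance every agent one time step; speed drops linearly with local
    density (a crude stand-in for the preference rules in the paper)."""
    for a in agents:
        # Local density: agents within +/- 2 m, divided by that patch's area.
        nearby = sum(1 for b in agents if abs(b.x - a.x) < 2.0)
        density = nearby / (4.0 * width)                 # persons per m^2
        speed = max(0.1, free_speed * (1.0 - density / 5.0))
        a.x = min(corridor_length, a.x + speed * dt)

agents = [Pedestrian(random.uniform(0.0, 10.0)) for _ in range(40)]
for _ in range(60):                                      # 60 steps of 0.5 s = 30 s
    step(agents)
print(f"mean position after 30 s: {sum(a.x for a in agents) / len(agents):.1f} m")
```

Narrowing `width` raises the local density at a fixed headcount and therefore slows the whole column, reproducing qualitatively the width-versus-efficiency relationship the paper extracts from its Swarm model.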
Procedia PDF Downloads 217