Search results for: initial peak load
4934 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method
Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy
Abstract:
The increasing demand for adopting pile foundations in bridges has pointed towards the need to constantly improve the existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simple approach using the energy-based method to assess the displacement responses of piles subjected to general loading conditions: axial load, lateral load, and a bending moment. The governing differential equations and the boundary conditions for a bridge pile embedded in multi-layered soil strata subjected to the general loading conditions are obtained using Hamilton's principle, employing variational principles and the minimization of energies. The soil non-linearity has been incorporated through simple constitutive relationships that account for the degradation of soil moduli with increasing strain values. A simple power law based on published literature is used, where the soil is assumed to be nonlinear-elastic and perfectly plastic. A Tresca yield surface is assumed to develop the soil stiffness variation with different strain levels that defines the non-linearity of the soil strata. This numerical technique has been applied to a pile foundation in a two-layered soil stratum for a pier supporting the bridge and solved using the software MATLAB R2019a. The analysis yields the bridge pile displacements at any depth along the length of the pile. The results of the analysis are in good agreement with the published field data and the three-dimensional finite element analysis results obtained using the software ANSYS 2019R3. The methodology can be extended to study the response of multi-strata soil supporting group piles underneath the bridge piers.
Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method
Procedia PDF Downloads 139
4933 Turbulent Boundary Layer over 3D Sinusoidal Roughness
Authors: Misarah Abdelaziz, L Djenidi, Mergen H. Ghayesh, Rey Chin
Abstract:
Measurements of a turbulent boundary layer over 3D sinusoidal roughness are performed for friction Reynolds numbers ranging from 650 < Reτ < 2700. The surface was machined from an acrylic sheet on a Multicam CNC router (0.6 mm stepover, 12 mm ball-nose cutter) to have an amplitude of k/2 = 0.8 mm and an equal wavelength of 8k in both the streamwise and spanwise directions. Single hotwire anemometry measurements are performed at one location, x = 1.5 m downstream, at different freestream velocities under zero-pressure-gradient conditions. As expected, the roughness causes a downward shift of the wall-unit-normalised streamwise mean velocity profile when compared to the smooth-wall profile. The shift increases with increasing Reτ, 1.8 < ∆U+ < 6.2. The coefficient of friction is almost constant in all cases, Cf = 0.0042 ± 0.0002. The results show a gradual reduction of the inner peak of the profiles with increasing Reτ until it is fully destroyed at Reτ = 2700.
Keywords: hotwire, roughness, TBL, ZPG
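The reported friction coefficient and roughness function tie together through the rough-wall log law. A minimal sketch of the relation, assuming the standard log-law constants κ = 0.41 and B = 5.0 and a hypothetical freestream velocity (neither is stated in the abstract):

```python
import math

def friction_velocity(cf, u_inf):
    """Friction velocity from the skin-friction coefficient: Cf = 2 (u_tau / U_inf)^2."""
    return u_inf * math.sqrt(cf / 2.0)

def u_plus_log_law(y_plus, delta_u_plus=0.0, kappa=0.41, b=5.0):
    """Rough-wall log law: U+ = (1/kappa) ln(y+) + B - ΔU+, where ΔU+ is the roughness function."""
    return math.log(y_plus) / kappa + b - delta_u_plus

# Values reported in the abstract: Cf ≈ 0.0042, ΔU+ up to 6.2.
u_tau = friction_velocity(0.0042, 10.0)   # hypothetical U_inf = 10 m/s
shift = u_plus_log_law(1000.0) - u_plus_log_law(1000.0, delta_u_plus=6.2)
```

The shift variable simply recovers ΔU+ at any y+, illustrating that the roughness function is a uniform downward offset of the logarithmic region.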
Procedia PDF Downloads 217
4932 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter
Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh
Abstract:
Solar energy is one of the most remarkable renewable energy sources, with particular characteristics such as being unlimited, non-polluting, and freely accessible. Generally, solar energy can be used in thermal and photovoltaic (PV) forms. The cost of installing a PV system is very high. Additionally, due to its dependence on environmental conditions such as solar radiation and ambient temperature, the electrical power generation of this system is unpredictable, and without power electronic devices there is no guarantee of maximum power delivery at its output. Maximum power point tracking (MPPT) should be used to achieve the maximum power of a PV string. MPPT is an essential part of the PV system; without it, the maximum power of the PV string cannot be reached and high losses are caused in the PV system. One of the notable challenges in MPPT is partial shading conditions (PSC). Under PSC, the output photocurrent of a shaded PV module is less than the PV string current. The difference between these currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the shaded PV module; this condition is called the hot-spot phenomenon. An anti-parallel diode, known as the bypass diode, is inserted across the PV module to prevent this phenomenon. Due to the action of the bypass diodes under PSC, the P-V curve of the PV string has several peaks; the peak that yields the maximum available power is the global peak. Model-based global MPPT (GMPPT) methods can estimate the optimal point faster than other GMPPT approaches. Centralized, modular, and interleaved DC-DC converter topologies are the main structures that can be used for GMPPT at a PV string.
There are some problems in the centralized structure, such as current mismatch losses in the PV string, loss of power from the shaded modules bypassed by their diodes under PSC, and the need for a series connection of many PV modules to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter; as the power demanded from the PV string increases, so does the number of DC-DC converters used in the PV system. As a result, the cost of the modular structure is very high. Model-based GMPPT can instead be implemented through the multi-input interleaved boost DC-DC converter to increase the power extraction from the PV string and reduce hot-spot and current mismatch errors under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other structures mentioned, such as high reliability and efficiency, better regulation of the DC voltage at the DC link, mitigation of notable errors such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches.
Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions
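Under PSC the string P-V curve is multi-peaked, so any GMPPT scheme must locate the global peak rather than hill-climb to the nearest local one. A minimal sketch of a global voltage sweep over a hypothetical two-peak P-V curve (the curve model and all numbers are illustrative, not from the paper):

```python
import math

def pv_string_power(v):
    """Hypothetical two-peak P-V curve of a partially shaded string (illustrative only)."""
    local_peak = 100.0 * math.exp(-((v - 20.0) / 8.0) ** 2)   # peak from the shaded section
    global_peak = 140.0 * math.exp(-((v - 45.0) / 6.0) ** 2)  # peak from the unshaded section
    return local_peak + global_peak

def global_mppt_scan(p_of_v, v_min=0.0, v_max=60.0, step=0.5):
    """Coarse voltage sweep: evaluate P(V) over the whole range and keep the global maximum."""
    best_v, best_p = v_min, p_of_v(v_min)
    v = v_min
    while v <= v_max:
        p = p_of_v(v)
        if p > best_p:
            best_v, best_p = v, p
        v += step
    return best_v, best_p

v_star, p_star = global_mppt_scan(pv_string_power)
```

A conventional perturb-and-observe tracker started near 20 V would lock onto the local peak; the full sweep finds the global one near 45 V, which a model-based method would then refine.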
Procedia PDF Downloads 128
4931 Centralized Peak Consumption Smoothing Revisited for Habitat Energy Scheduling
Authors: M. Benbouzid, Q. Bresson, A. Duclos, K. Longo, Q. Morel
Abstract:
Currently, electricity suppliers must predict the consumption of their customers in order to deduce the power they need to produce. It is therefore important, as a first step, to optimize household consumption to obtain flatter curves by limiting peaks in energy consumption. Here, centralized real-time scheduling is proposed to manage the parallel starting of appliances. The aim is not to exceed a certain limit while optimizing the power consumption across a habitat. A Raspberry Pi is used as the energy box; the scheduler interacts with the various sensors over 6LoWPAN. At the scale of a single dwelling, household consumption decreases, particularly at the times corresponding to the peaks. However, it would be wiser to consider a residential complex so that the result would be more significant; the ceiling would then no longer be fixed. The scheduling would be done on two scales: first per dwelling, and second at the level of the residential complex.
Keywords: smart grid, energy box, scheduling, Gang Model, energy consumption, energy management system, wireless sensor network
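The scheduler's core decision, starting appliances so that the aggregate habitat load never exceeds a ceiling, can be sketched as a greedy earliest-slot search (the slot granularity, ceiling, and appliance list are illustrative assumptions, not from the paper):

```python
def schedule(appliances, ceiling_kw, horizon=24):
    """Greedy scheduler: place each appliance at the earliest start slot
    whose whole run keeps the total load at or under the ceiling."""
    load = [0.0] * horizon                     # aggregate load per time slot
    starts = {}
    # Place the most power-hungry appliances first.
    for name, power, duration in sorted(appliances, key=lambda a: -a[1]):
        for t in range(horizon - duration + 1):
            if all(load[t + k] + power <= ceiling_kw for k in range(duration)):
                for k in range(duration):
                    load[t + k] += power       # commit the appliance's run
                starts[name] = t
                break
    return starts, load

appliances = [("oven", 2.0, 3), ("washer", 1.5, 2), ("heater", 1.0, 4), ("tv", 0.5, 2)]
starts, load = schedule(appliances, ceiling_kw=3.0)
```

Every appliance is placed, yet the load profile never crosses the 3 kW ceiling, which is exactly the peak-smoothing behaviour the abstract describes at the single-dwelling scale.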
Procedia PDF Downloads 312
4930 Uncontrollable Inaccuracy in Inverse Problems
Authors: Yu Menshikov
Abstract:
In this paper, the influence of errors in experimentally obtained function derivatives at the initial time (uncontrollable inaccuracy) on the results of inverse problem solutions is investigated. It is shown that these errors distort the inverse problem solution, as a rule, near the beginning of the interval over which the solution is analyzed. Several methods for removing the influence of uncontrollable inaccuracy are suggested.
Keywords: inverse problems, filtration, uncontrollable inaccuracy
Procedia PDF Downloads 502
4929 Experimental Study Analysis of Flow over Pickup Truck’s Cargo Area Using Bed Covers
Authors: Jonathan Rodriguez, Dominga Guerrero, Surupa Shaw
Abstract:
Automobiles are modeled in various forms, and they interact with air when in motion. Aerodynamics is the study of such interactions, where solid bodies affect the way air moves around them. The shape of solid bodies can impact the ease with which they move against the flow of air, so any additional freightage, or loads, impacts their aerodynamics. It is important to transport people and cargo safely; despite the various safety measures, there are a large number of vehicle-related accidents. This study explores precisely the effects an automobile experiences with added cargo and covers. The addition of these items changes the original vehicle shape and the approved design for safe driving. This paper showcases the effects of the changed vehicle shape and design via experimental testing conducted on a physical 1:27 scale model and a CAD model of an F-150 pickup truck, the most common pickup truck in the United States, with differently shaped loads and weights traveling at a constant speed. The additional freightage produces unwanted drag or lift, resulting in lower fuel efficiency and unsafe driving conditions. This study employs an adjustable external shell on the F-150 pickup truck to create a controlled aerodynamic geometry that combats the detrimental effects of additional freightage. The results use colored powder, which acts as a visual medium for the interaction of air with the vehicle, to highlight the impact of the additional freight on the automobile's external shell, along with simulation models in Altair CFD software of twelve cases of added loads on an F-150 pickup truck. This paper is an attempt toward standardizing the geometric design of the external shell, given the uniqueness of every load and its placement on the vehicle, while providing real-time data to be compared with simulation results from the existing literature.
Keywords: aerodynamics, CFD, freightage, pickup cover
Procedia PDF Downloads 166
4928 A Data-Driven Agent Based Model for the Italian Economy
Authors: Michele Catalano, Jacopo Di Domenico, Luca Riccetti, Andrea Teglio
Abstract:
We develop a data-driven agent-based model (ABM) for the Italian economy and calibrate the model's initial conditions and parameters. As a preliminary step, we replicate the Monte Carlo simulation for the Austrian economy. Then, we evaluate the dynamic properties of the model: the long-run equilibrium and the allocative efficiency in terms of disequilibrium patterns arising in the search-and-matching process for final goods, capital, intermediate goods, and credit markets. In this perspective, we use a randomized initial condition approach. We perform a robustness analysis, perturbing the system for different parameter setups. We explore the empirical properties of the model using a rolling-window forecast exercise from 2010 to 2022 to observe the model's forecasting ability in the wake of the COVID-19 pandemic. We analyze the properties of the model with different numbers of agents, that is, with different scales of the model compared to the real economy. The model generally displays transient dynamics that properly fit macroeconomic data in terms of forecasting ability. We stress the model with a large set of shocks, namely interest rate policy, fiscal policy, and exogenous factors such as external foreign demand for exports; in this way, we can identify the most exposed sectors of the economy. Finally, we modify the technology mix of the various sectors and, consequently, the underlying input-output sectoral interdependence, to stress the economy and observe the long-run projections. This lets the model generate endogenous crises due to the implied structural change, technological unemployment, and a potential lack of aggregate demand, creating the conditions for cyclical endogenous crises reproduced in this artificial economy.
Keywords: agent-based models, behavioral macro, macroeconomic forecasting, micro data
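The rolling-window forecast exercise can be sketched generically: slide a window along the series, issue a one-step-ahead forecast from each window, and score it against the realized value. The sketch below uses a naive last-value forecast on synthetic data; it illustrates only the evaluation loop, not the ABM itself:

```python
def rolling_one_step_rmse(series, window, forecast=lambda w: w[-1]):
    """Slide a fixed-length window over the series, forecast the next point
    from each window, and return the root-mean-square one-step-ahead error."""
    errors = []
    for t in range(window, len(series)):
        f = forecast(series[t - window:t])   # forecast uses the window only
        errors.append(series[t] - f)
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# On a deterministic unit trend, the naive forecast is always off by one step.
rmse = rolling_one_step_rmse(list(range(1, 11)), window=3)
```

In the paper's exercise, the forecast callable would be replaced by a full re-simulation of the calibrated ABM from the window's end point.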
Procedia PDF Downloads 69
4927 Household Energy Usage in Nigeria: Emerging Advances for Sustainable Development
Authors: O. A. Akinsanya
Abstract:
This paper presents the emerging trends in household energy usage in Nigeria for sustainable development. The paper relies on a direct appraisal of energy use in the residential sector and on a structured questionnaire to establish the usage pattern, energy management measures, and emerging advances. The use of efficient appliances, retrofitting, smart buildings, and smart user behaviour are among the beneficial measures. The paper also identifies smart buildings, prosumer activities, hybrid energy use, improved awareness, and solar stand-alone street/security lights as the trend, and concludes that energy management strategies would result in a significant reduction in monthly bills and peak loads as well as in the total electricity consumption in Nigeria, and are therefore good for sustainable development.
Keywords: household, energy, trends, strategy, sustainable, Nigeria
Procedia PDF Downloads 65
4926 A Multimodal Measurement Approach Using Narratives and Eye Tracking to Investigate Visual Behaviour in Perceiving Naturalistic and Urban Environments
Authors: Khizar Z. Choudhrya, Richard Coles, Salman Qureshi, Robert Ashford, Salim Khan, Rabia R. Mir
Abstract:
The majority of existing landscape research has been derived from heuristic evaluations, without empirical insight into real participants' visual responses. In this research, a modern multimodal measurement approach (using narratives and eye tracking) was applied to investigate visual behaviour in perceiving naturalistic and urban environments. This research is unique in exploring gaze behaviour on environmental images possessing different levels of saliency, since eye behaviour is predominantly attracted by salient locations. The methodology for studying naturalistic and urban environments is drawn from market research approaches that examine visual responses and qualities, a critical and hitherto unexplored approach. The research was conducted using mixed quantitative and qualitative methods. On the whole, the results corroborated existing landscape research findings, but they also identified potential refinements. The research contributes both methodologically and empirically to human-environment interaction (HEI). The study focused on initial impressions of environmental images with the help of eye tracking and, given the importance of the image, explored the factors that influence initial fixations in relation to expectations and preferences. A key finding is that each participant has a unique navigation style when moving through the different elements of landscape images; this individual navigation style is given the name 'visual signature'. The study adds the clarity needed to complete the picture and offers insight for future landscape researchers.
Keywords: human-environment interaction (HEI), multimodal measurement, narratives, eye tracking
Procedia PDF Downloads 337
4925 Concrete Mixes for Sustainability
Authors: Kristyna Hrabova, Sabina Hüblova, Tomas Vymazal
Abstract:
The structural design of a concrete structure must deliver structural safety and serviceability, together with durability, robustness, sustainability, and resilience. A sustainable approach is at the heart of the research agenda around the world, and the fib commission is also working on the new Model Code 2020. It is now clear that the effects of mechanical and environmental loads, and even social coherence, need to be reflected and included in the design and evaluation of structures. This study presents a methodology for the sustainability assessment of various concrete mixtures.
Keywords: concrete, cement, sustainability, Model Code 2020
Procedia PDF Downloads 176
4924 Investigation Into the Effects of Egg Shells Powder and Groundnut Husk Ash on the Properties of Concrete
Authors: Usman B. M., Basheer O. B., Ahmed A., Amali N. U., Taufeeq O.
Abstract:
This study presents an investigation into the improvement of the strength properties of concrete using egg shell powder (ESP) and groundnut husk ash (GHA) as additives, so as to reduce its high cost and find an alternative disposal route for agricultural waste. A standard consistency test was carried out on the egg shell powder and groundnut husk ash. Concrete cubes (150 mm × 150 mm × 150 mm) were cast with a prescribed mix ratio of 1:2:4 and a water-cement ratio of 0.6. A total of one hundred and forty-four (144) cubes were cast and cured for 3, 7, and 28 days, and the compressive strength was subsequently determined and compared with the relevant specifications. Consistency tests on the cement paste at the various concentrations exhibited an increase in setting time as the concentration increased, with the highest values recorded at 5% egg shell powder and groundnut husk ash concentration: 219 minutes for the initial setting time and 275 minutes for the final setting time, against 159 minutes and 234 minutes, respectively, for the control specimen. The investigations showed that GHA was predominantly silicon oxide (56.73%), with a combined SiO₂, Al₂O₃, and Fe₂O₃ content of 66.75%, while ESP was predominantly calcium oxide (52.75%), with a combined SiO₂, Al₂O₃, and Fe₂O₃ content of 3.86%. The addition of GHA and ESP to concrete produced only a slight difference in compressive strength for additive contents up to 5%, and a marked decrease in compressive strength with further increases in GHA and ESP content. The 28-day compressive strength of the concrete cubes, compared with that of the control, showed a slight increase.
Thus, the use of GHA and ESP as a partial replacement for cement will provide an economic use of by-products and consequently produce cheaper concrete construction without compromising its strength.
Keywords: additive, concrete, eggshell powder, groundnut husk ash, compressive strength
Procedia PDF Downloads 133
4923 Optimized Passive Heating for Multifamily Dwellings
Authors: Joseph Bostick
Abstract:
A method is presented for decreasing the heating load of HVAC systems in a single-dwelling model of a multifamily building by controlling movable insulation through the optimization of flux, time, surface incident solar radiation, and temperature thresholds. Simulations are completed using a co-simulation between EnergyPlus and MATLAB, the latter serving as an optimization tool to find the optimal control thresholds. Optimization of the control thresholds leads to a significant decrease in total heating energy expenditure.
Keywords: EnergyPlus, MATLAB, simulation, energy efficiency
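The control logic being optimized can be sketched as a simple threshold rule: retract the movable insulation when incident solar radiation is strong enough to provide useful gain and the zone still calls for heat, and stay insulated otherwise. The threshold values below are illustrative assumptions, not the optimized ones from the study:

```python
def insulation_open(solar_wm2, indoor_c, solar_threshold=300.0, setpoint_c=20.0):
    """Return True to retract (open) the movable insulation: admit solar gain
    only when incident radiation exceeds the flux threshold and the zone is
    below its temperature setpoint; otherwise stay insulated."""
    return solar_wm2 > solar_threshold and indoor_c < setpoint_c

open_now = insulation_open(solar_wm2=500.0, indoor_c=18.0)   # sunny, zone under-heated
stay_shut = insulation_open(solar_wm2=80.0, indoor_c=18.0)   # overcast: keep insulation on
```

In the co-simulation described, the two thresholds would be the decision variables tuned by the MATLAB optimizer against EnergyPlus heating-energy results.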
Procedia PDF Downloads 173
4922 New Test Algorithm to Detect Acute and Chronic HIV Infection Using a 4th Generation Combo Test
Authors: Barun K. De
Abstract:
Acquired immunodeficiency syndrome (AIDS) is caused by two types of human immunodeficiency viruses, collectively designated HIV. HIV infection is spreading globally, particularly in developing countries. Before an individual is diagnosed with HIV, the disease goes through different phases: first an acute early phase, followed by an established or chronic phase, and subsequently a latency period, after which the individual becomes immunodeficient. It is in the acute phase that an individual is highly infectious, due to a high viral load. Presently, HIV diagnosis involves tests that do not detect the acute-phase infection, during which both the viral RNA and the p24 antigen are expressed. Instead, these less sensitive tests detect antibodies to viral antigens, which typically seroconvert later in the disease process, following acute infection. These antibodies are detected in both asymptomatic HIV-infected individuals and AIDS patients. Studies indicate that early diagnosis and treatment of HIV infection can reduce medical costs, improve survival, and reduce spreading of the infection to uninfected partners. Newer 4th-generation combination antigen/antibody tests are highly sensitive and specific for the detection of acute and established HIV infection (HIV-1 and HIV-2), enabling immediate linkage to care. The CDC (Centers for Disease Control and Prevention, USA) recently recommended an algorithm involving three different tests to screen for and diagnose acute and established infections of HIV-1 and HIV-2 in a general population. Initially, a 4th-generation combo test detects the viral antigen p24 and specific antibodies against the HIV-1 and HIV-2 envelope proteins. If this test is positive, it is followed by a second test, known as a differentiation assay, which detects antibodies against specific HIV-1 and HIV-2 envelope proteins, confirming an established infection of HIV-1 or HIV-2.
However, if it is negative, another test is performed that measures viral load, confirming an acute HIV-1 infection. Screening of a Phoenix-area population detected 0.3% new HIV infections, among which 32.4% were acute cases. Studies in the U.S. indicate that this algorithm effectively reduces HIV infection through immediate treatment and education following diagnosis.
Keywords: new algorithm, HIV, diagnosis, infection
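The three-test flow described above can be sketched as a decision function (the result labels are illustrative encodings; the clinical flow itself is the one stated in the abstract):

```python
def hiv_diagnosis(combo_reactive, differentiation_reactive=None, nat_detected=None):
    """Three-step flow as described in the abstract:
    1) 4th-generation antigen/antibody combo test;
    2) if reactive, an HIV-1/HIV-2 antibody differentiation assay;
    3) if that is negative, a nucleic acid (viral load) test."""
    if not combo_reactive:
        return "negative"                        # no laboratory evidence of HIV
    if differentiation_reactive:
        return "established HIV-1/HIV-2 infection"
    if nat_detected:
        return "acute HIV-1 infection"           # RNA present, antibodies not yet seroconverted
    return "combo false positive"

result = hiv_diagnosis(combo_reactive=True, differentiation_reactive=False, nat_detected=True)
```

The acute case is exactly the branch the older antibody-only algorithms missed: combo reactive, differentiation negative, viral load detectable.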
Procedia PDF Downloads 409
4921 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery
Abstract:
Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate to a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion, due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of velocity with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted on manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting the head acceleration and delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively.
Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465
Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI
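The best-fit-line step can be sketched as an ordinary least-squares fit of head acceleration against delta-V. The data points below are hypothetical, constructed to lie exactly on a line with an illustrative slope of 0.465 g per km/h:

```python
def least_squares_line(xs, ys):
    """Ordinary least squares: return (slope, intercept) of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical delta-V (km/h) vs head acceleration (g) pairs, noise-free for illustration.
delta_v = [2.0, 4.0, 6.0, 8.0, 10.0]
head_g = [0.465 * v for v in delta_v]
slope, intercept = least_squares_line(delta_v, head_g)
```

With real crash data the residual scatter is what motivates the paper's further quadratic and linear mixed models over occupant parameters.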
Procedia PDF Downloads 231
4920 Problem Based Learning and Teaching by Example in Dimensioning of Mechanisms: Feedback
Authors: Nicolas Peyret, Sylvain Courtois, Gaël Chevallier
Abstract:
This article outlines the development of Project Based Learning (PBL) in the final year of a Bachelor's degree. This form of pedagogy aims to involve the students more fully from the beginning of the module: the theoretical contributions are introduced during the project, in the course of solving a technological problem. The module in question is the mechanical dimensioning module of Supméca, a French engineering school that issues a Master's degree. While the teaching methods used in primary and secondary education are frequently renewed in France at the instigation of teachers and inspectors, higher education remains relatively traditional in its practices. Recently, some colleagues have felt the need to put application back at the heart of their theoretical teaching. This need is induced by the difficulty of covering all the knowledge deductively before its application. It is therefore tempting to make the students 'learn by doing', even if this does not cover some parts of the theoretical knowledge. The other argument supporting this type of learning is the students' lack of motivation for lecture courses. Role-play allows scenarios favouring interaction between students and teachers. However, this pedagogical form, known as 'pedagogy by project', is difficult to apply in the first years of university studies because of the students' low level of autonomy and individual responsibility. What the student actually learns from the initial program, as well as the evaluation of the competences acquired in this type of pedagogy, also remains an open problem. We therefore propose to add to the project-based format a regressive element of teacher interventionism based on pedagogy by example. This pedagogical scenario is based on cognitive load theory and Bruner's constructivist theory.
It has been built on the six points of the encouragement process defined by Bruner, with a concrete objective: to allow the students to go beyond the basic skills of dimensioning and acquire the more global skills of engineering. The implementation of project-based teaching coupled with pedagogy by example compensates for the lack of experience and autonomy of first-year students, while strongly involving them in the first few minutes of the module. In this project, students were confronted with real dimensioning problems and were able to understand the links and influences between parameter variations and dimensioning, an objective that we did not reach with classical teaching. This form of pedagogy accelerates the mastery of basic skills, leaving more time for the engineering skills, namely the convergence of each dimensioning in order to obtain a validated mechanism. A self-evaluation of the project skills acquired by the students will also be presented.
Keywords: Bruner's constructivist theory, mechanisms dimensioning, pedagogy by example, problem based learning
Procedia PDF Downloads 189
4919 Dynamic Wind Effects in Tall Buildings: A Comparative Study of Synthetic Wind and Brazilian Wind Standard
Authors: Byl Farney Cunha Junior
Abstract:
In this work, a dynamic three-dimensional analysis of a 47-story building located in Goiania city is carried out for wind loads generated using both the Brazilian wind code NBR6123 (ABNT, 1988) and the Synthetic-Wind method. Three different methodologies are used to model the frames: the shear building model and both two- and three-dimensional finite element models. To start the analysis, a plane frame is first studied to validate the shear building model; in order to compare the natural frequencies and the displacements at the top of the structure, the same plane frame is modeled by the finite element method in the SAP2000 V10 software. The same steps are applied to an idealized 20-story spatial frame, which helps illustrate the stiffness correction process applied to the columns. Based on these models, the two methods used to generate the wind loads are presented: the discrete model proposed in the Brazilian wind code NBR6123 (ABNT, 1988), and the Synthetic-Wind method, which uses the Davenport spectrum divided into a set of frequencies to generate the temporal series of loads. Finally, the 47-story building is analyzed using both the three-dimensional finite element method in SAP2000 V10 and the shear building model. The models are loaded with wind loads generated by the wind code NBR6123 (ABNT, 1988) and by the Synthetic-Wind method, considering different wind directions. The displacements and the internal forces in columns and beams are compared, and a comparative study is carried out for the situation of a full elevated reservoir. As observed, the displacements obtained with the SAP2000 V10 model are greater when it is loaded with the NBR6123 (ABNT, 1988) wind load related to the permanent phase of the structure's response.
Keywords: finite element method, synthetic wind, tall buildings, shear building
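The Synthetic-Wind step, generating a fluctuating load series from the Davenport spectrum divided into discrete frequencies, can be sketched by spectral synthesis with random phases. The mean wind speed and surface drag coefficient below are illustrative assumptions, not values from the paper:

```python
import math
import random

def davenport_spectrum(f, v10=30.0, kappa=0.005):
    """Davenport along-wind velocity spectrum S(f), with x = 1200 f / V10
    (v10 = mean wind speed at 10 m, kappa = surface drag coefficient)."""
    x = 1200.0 * f / v10
    return 4.0 * kappa * v10 ** 2 * x ** 2 / (f * (1.0 + x ** 2) ** (4.0 / 3.0))

def synthetic_wind(duration=600.0, n_freq=200, n_samples=2048, seed=1):
    """Sum of cosines with amplitudes sqrt(2 S(f) df) and random phases,
    using harmonics of 1/duration so each component spans whole periods."""
    rng = random.Random(seed)
    df = 1.0 / duration
    freqs = [(i + 1) * df for i in range(n_freq)]
    amps = [math.sqrt(2.0 * davenport_spectrum(f) * df) for f in freqs]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    dt = duration / n_samples
    return [sum(a * math.cos(2.0 * math.pi * f * n * dt + p)
                for a, f, p in zip(amps, freqs, phases))
            for n in range(n_samples)]

u = synthetic_wind()
```

By construction the series has zero mean and a variance equal to the discretized spectral content, which is the property that makes the temporal load series statistically consistent with the target spectrum.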
Procedia PDF Downloads 272
4918 Influence of Decolourisation Condition on the Physicochemical Properties of Shea (Vitellaria paradoxa Gaertner F) Butter
Authors: Ahmed Mohammed Mohagir, Ahmat-Charfadine Mahamat, Nde Divine Bup, Richard Kamga, César Kapseu
Abstract:
In this investigation, kinetic studies of the adsorption of the colour material of shea butter showed an absorbance peak at a wavelength of 440 nm, and the equilibrium time was found to be 30 min. Response surface methodology (RSM) applying a Doehlert experimental design was used to investigate the decolourisation parameters of crude shea butter. The decolourisation process was significantly influenced by three independent parameters: contact time, decolourisation temperature, and adsorbent dose. The responses of the process were oil loss, acid value, peroxide value, and colour index. Response surface plots were successfully produced to visualise the effect of the independent parameters on the responses of the process.
Keywords: decolourisation, Doehlert experimental design, physicochemical characterisation, RSM, shea butter
Procedia PDF Downloads 414
4917 Seismic Fragility for Sliding Failure of Weir Structure Considering the Process of Concrete Aging
Authors: HoYoung Son, Ki Young Kim, Woo Young Jung
Abstract:
This study investigated, by means of seismic fragility analysis, the change in the performance of a weir structure as the durability of concrete, the main material of the weir, decreases due to aging. In the analysis, the elastic modulus of concrete was assumed to be reduced by 10% to account for aged deterioration. The seismic fragility analysis was based on the Monte Carlo simulation method combined with a 2D nonlinear finite element model in the ABAQUS platform, with the deterioration of concrete taken into account. Finally, the seismic fragilities of the pre- and post-deterioration models were compared to study the performance of the weir. The results show that the probability of failure at the moderate damage level was larger for the deteriorated model than for the pre-deterioration model once the peak ground acceleration (PGA) exceeded 0.4 g.
Keywords: weir, FEM, concrete, fragility, aging
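The Monte Carlo fragility step can be sketched as counting, at each PGA level, how often a simulated seismic demand exceeds a randomized sliding capacity. The lognormal demand model below is a toy stand-in for the 2D nonlinear finite element model, purely illustrative:

```python
import math
import random

def fragility_curve(pga_levels, n_sim=5000, seed=42):
    """Empirical probability of failure at each PGA level: demand scales with
    PGA and carries lognormal uncertainty; the sliding capacity is fixed."""
    rng = random.Random(seed)
    capacity = 1.0                              # normalized sliding capacity
    probs = []
    for pga in pga_levels:
        failures = 0
        for _ in range(n_sim):
            demand = 2.0 * pga * math.exp(rng.gauss(0.0, 0.4))  # lognormal scatter
            if demand > capacity:
                failures += 1
        probs.append(failures / n_sim)
    return probs

probs = fragility_curve([0.1, 0.2, 0.4, 0.8])
```

In the study, each sample would instead come from a nonlinear ABAQUS run, and the deteriorated model would use a capacity reduced in line with the 10% lower elastic modulus, shifting the whole curve upward.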
Procedia PDF Downloads 423
4916 Response of Full-Scale Room Building Against Blast Loading
Authors: Eid Badshah, Amjad Naseer, Muhammad Ashraf
Abstract:
In this paper, a full-scale brick masonry room, along with the veranda of a typical school building, was subjected to eight successive blast tests with increasing charge weights ranging from 0.5 kg to 16.02 kg at a fixed stand-off distance of 3.66 m. Pressure-time histories were obtained by a data acquisition system from pressure sensors installed at different points of the room as well as on the veranda columns. The resulting damage pattern at different locations was observed during each test. Weak zones of the masonry room were identified, and scaled distances for different damage levels were obtained experimentally. The results provide a basis for determining the response of a masonry room building against blast loading in a specific threat scenario.
Keywords: peak pressure, composition-B, TNT, pressure sensor, scaled distance, masonry
Procedia PDF Downloads 123
4915 Assessing the Severity of Traffic Related Air Pollution in South-East London to School Pupils
Authors: Ho Yin Wickson Cheung, Liora Malki-Epshtein
Abstract:
Outdoor air pollution presents a significant challenge for public health globally, especially in urban areas, with road traffic acting as the primary contributor. Several studies have documented the antagonistic relation between traffic-related air pollution (TRAP) and health, especially for vulnerable groups of the population, particularly young pupils. Generally, TRAP can damage the developing brain, restricting children's ability to learn, and, more importantly, cause detrimental respiratory issues in later life. But little is known about the specific exposure of children at school during the school day and the impact this may have on their overall exposure to pollution at a crucial time in their development. This project set out to examine the air quality across primary schools in South-East London and assess the variability of the data based on the schools' geographic location and surroundings. Nitrogen dioxide, particulate matter, and carbon dioxide were measured with diffusion tubes and portable monitoring equipment at eight schools across three local areas: Greenwich, Lewisham, and Tower Hamlets. The study first examines the geographical features of the school surroundings (e.g., coverage of urban road structure and green infrastructure) and then utilises three different methods to capture pollutant data. The obtained results are compared with existing data from monitoring stations to understand the differences in air quality before and during the pandemic. Furthermore, most studies in this field have neglected personal exposure to pollutants and calculated it from values at fixed monitoring stations. This paper therefore introduces an alternative approach: calculating human exposure to air pollution from real-time data obtained while commuting within the study areas (driving routes and field walking).
It was found that schools located very close to motorways do not necessarily suffer the highest pollutant concentrations; a school near the most congested routes may instead show the poorest air quality. Monitored results also indicate that annual air pollution values decreased slightly during the pandemic; however, the majority of the data still exceed the WHO guidelines. Finally, the total exposure to NO2 during commuting was calculated for two selected routes. Results show total exposures for route 1 of 21,730 μg/m³ and 28,378.32 μg/m³, and for route 2 of 30,672 μg/m³ and 16,473 μg/m³. The variance might be due to differences in traffic volume and requires further research. Exposure to NO2 during commuting was plotted at detailed timesteps, showing that peaks usually occurred while commuting; these findings consolidate the initial assumption about the severity of TRAP. To conclude, this paper yields significant benefits for understanding air quality across schools in London using the new approach of capturing human exposure along driving routes, confirming the severity of air pollution and underlining the necessity for policymakers to consider environmental sustainability when making decisions that protect society's future pillars.
Keywords: air pollution, schools, pupils, congestion
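The commuting-exposure calculation mentioned above amounts to time-integrating the concentration readings logged along a route. The route names and NO2 values below are illustrative, not the measured data from the study:

```python
# Hypothetical 1-minute NO2 readings (ug/m3) logged while driving a route;
# the routes and values here are made up for illustration only.
route_readings = {
    "route_1": [38, 42, 55, 90, 140, 95, 60, 44],
    "route_2": [30, 33, 47, 71, 64, 52, 40],
}

def total_exposure(readings, minutes_per_sample=1.0):
    """Time-integrated exposure: sum of concentration x sampling duration."""
    return sum(c * minutes_per_sample for c in readings)

for route, readings in route_readings.items():
    print(route,
          "total:", total_exposure(readings), "ug.min/m3",
          "peak:", max(readings), "ug/m3")
```

Reporting the peak alongside the integral mirrors the paper's observation that exposure maxima occur during the commute itself.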
Procedia PDF Downloads 117
4914 Performance Assessment of Carrier Aggregation-Based Indoor Mobile Networks
Authors: Viktor R. Stoynov, Zlatka V. Valkova-Jarvis
Abstract:
The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement in the overall performance of Next Generation Networks (NGNs). Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, enables more efficient use of the available spectrum by combining multiple Component Carriers (CCs) into a virtual wideband channel. LTE-A (Long Term Evolution-Advanced) CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. In this way, increased data rates and dynamic load balancing can be achieved, resulting in more reliable and efficient operation of mobile networks and enabling high-bandwidth mobile services. In this paper, several distinct CA deployment strategies for the utilisation of spectrum bands are compared in indoor-outdoor scenarios, simulated via the recently developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the level of fairness of radio resource allocation, and other parameters into one summative assessment termed the Comparative Factor (CF). In addition, a comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The experimental results demonstrate that CA technology can improve network performance, especially in indoor scenarios. Additionally, we show that an increase in carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad channel conditions, often located at the periphery of the cells, can be improved by intelligent CA deployment.
Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user fairness plays a crucial role in improving the performance of LTE-A networks.
Keywords: comparative factor, carrier aggregation, indoor mobile network, resource allocation
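A summative metric combining average throughput and allocation fairness, as the CF does, can be sketched as follows. The paper's exact CF definition is not reproduced here; the weights, the reference throughput, and the use of Jain's fairness index are assumptions of this sketch:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: 1/n <= J <= 1, with J = 1 for perfectly equal users."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(t * t for t in throughputs))

def comparative_factor(throughputs, w_thr=0.5, w_fair=0.5, thr_ref=100.0):
    """Illustrative CF: weighted mix of normalised mean throughput and fairness.
    The weights and thr_ref (Mbps) are hypothetical parameters."""
    mean_thr = sum(throughputs) / len(throughputs)
    return w_thr * min(mean_thr / thr_ref, 1.0) + w_fair * jain_fairness(throughputs)

ca_users = [80.0, 75.0, 90.0, 85.0]      # per-UE Mbps with carrier aggregation
no_ca_users = [45.0, 20.0, 60.0, 15.0]   # per-UE Mbps without CA
print("CF with CA:   ", round(comparative_factor(ca_users), 3))
print("CF without CA:", round(comparative_factor(no_ca_users), 3))
```

Folding fairness into the score penalises deployments that boost cell-centre users while starving the cell edge, which is why a higher carrier frequency with strong wall losses can lower the CF.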
Procedia PDF Downloads 178
4913 Batch and Fixed-Bed Studies of Ammonia Treated Coconut Shell Activated Carbon for Adsorption of Benzene and Toluene
Authors: Jibril Mohammed, Usman Dadum Hamza, Muhammad Idris Misau, Baba Yahya Danjuma, Yusuf Bode Raji, Abdulsalam Surajudeen
Abstract:
Volatile organic compounds (VOCs) have been reported to be responsible for many acute and chronic health effects and for environmental degradation such as global warming. In this study, a renewable and low-cost coconut shell activated carbon (PHAC) was synthesized and treated with ammonia (PHAC-AM) to improve its hydrophobicity and affinity towards VOCs. Removal efficiencies and adsorption capacities of the ammonia-treated activated carbon for benzene and toluene were determined through batch and fixed-bed studies, respectively. The Langmuir, Freundlich, and Tempkin adsorption isotherms were tested, and the experimental data were best fitted by the Langmuir model and least fitted by the Tempkin model; the favourability and suitability of the fits were validated by the equilibrium parameter (RL) and the root mean square deviation (RMSD). Judging by the deviation of the predicted values from the experimental values, the pseudo-second-order kinetic model described the adsorption kinetics better than the pseudo-first-order model for both VOCs on PHAC and PHAC-AM. In the fixed-bed study, the effects of initial VOC concentration, bed height, and flow rate on benzene and toluene adsorption were studied. The highest bed capacities of 77.30 and 69.40 mg/g were recorded for benzene and toluene, respectively, at 250 mg/l initial VOC concentration, 2.5 cm bed height, and 4.5 ml/min flow rate. The results of this study reveal that ammonia-treated activated carbon (PHAC-AM) is a sustainable adsorbent for the treatment of VOCs in polluted waters.
Keywords: volatile organic compounds, equilibrium and kinetics studies, batch and fixed bed study, bio-based activated carbon
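The Langmuir fitting and RL validation described above follow a standard recipe: regress the linearised isotherm Ce/qe = Ce/qmax + 1/(KL·qmax) and compute RL = 1/(1 + KL·C0). The equilibrium data below are illustrative values, not the paper's measurements:

```python
def langmuir_fit(ce, qe):
    """Least-squares fit of the linearised Langmuir isotherm
    Ce/qe = Ce/qmax + 1/(KL*qmax); returns (qmax, KL)."""
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    sx, sy = sum(ce), sum(y)
    sxx = sum(v * v for v in ce)
    sxy = sum(a * b for a, b in zip(ce, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    qmax = 1.0 / slope
    kl = slope / intercept
    return qmax, kl

def separation_factor(kl, c0):
    """Equilibrium parameter R_L; 0 < R_L < 1 indicates favourable adsorption."""
    return 1.0 / (1.0 + kl * c0)

# Illustrative equilibrium data (Ce in mg/l, qe in mg/g) -- assumed, not measured.
ce = [10.0, 50.0, 100.0, 200.0, 250.0]
qe = [25.0, 55.0, 66.0, 73.0, 75.0]
qmax, kl = langmuir_fit(ce, qe)
print(f"qmax = {qmax:.1f} mg/g, KL = {kl:.4f} l/mg")
print(f"R_L at 250 mg/l: {separation_factor(kl, 250.0):.3f}")
```

An RL strictly between 0 and 1 at the working concentration is the favourability check the abstract refers to.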
Procedia PDF Downloads 223
4912 The Potential of On-Demand Shuttle Services to Reduce Private Car Use
Authors: B. Mack, K. Tampe-Mai, E. Diesch
Abstract:
Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of having to cope with ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems such as bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and to reducing peak-time crowding of classical public transport systems. An emerging type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries can be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance in each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After the completion of the discrete choice items, each participant rates the three modes of transport with regard to the pull factors of comfort, safety, privacy, and the opportunity to engage in activities such as reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model. The study is conducted in the region of Stuttgart in southern Germany. N = 1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver's license, and live in the city or the surrounding region of Stuttgart.
In the discrete choice experiment, participants are asked to assume that they live within the Stuttgart region, but outside of the city, and are planning the journey from their apartment to their place of work, training, or education during the morning peak traffic time. Then, for each item of the discrete choice experiment, they choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by push and pull factors.
Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport
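Discrete choice data of this kind are typically analysed with a multinomial logit model, in which each mode's systematic utility is a weighted sum of the push and pull attributes. The coefficients and attribute values below are assumptions for illustration, not estimates from the Stuttgart survey:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(mode) = exp(V_mode) / sum_j exp(V_j)."""
    m = max(utilities.values())          # subtract max to stabilise exponentials
    expv = {k: math.exp(v - m) for k, v in utilities.items()}
    total = sum(expv.values())
    return {k: v / total for k, v in expv.items()}

# Hypothetical taste coefficients for cost (EUR), travel time (min), wait (min).
beta_cost, beta_time, beta_wait = -0.10, -0.05, -0.08
modes = {
    "car":     beta_cost * 8.0 + beta_time * 35.0,
    "public":  beta_cost * 4.0 + beta_time * 50.0 + beta_wait * 8.0,
    "shuttle": beta_cost * 5.0 + beta_time * 40.0 + beta_wait * 5.0,
}
probs = choice_probabilities(modes)
for mode, p in probs.items():
    print(f"{mode:8s} {p:.3f}")
```

Varying the cost or waiting-time attributes in such a model is how the switching potentials between car, public transport, and shuttle would be quantified.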
Procedia PDF Downloads 182
4911 Nonconventional Method for Separation of Rosmarinic Acid: Synergic Extraction
Authors: Lenuta Kloetzer, Alexandra C. Blaga, Dan Cascaval, Alexandra Tucaliuc, Anca I. Galaction
Abstract:
Rosmarinic acid, an ester of caffeic acid and 3-(3,4-dihydroxyphenyl)lactic acid, is considered a valuable compound for the pharmaceutical and cosmetic industries due to its antimicrobial, antioxidant, antiviral, anti-allergic, and anti-inflammatory effects. It can be obtained by extraction from vegetable or animal materials, by chemical synthesis, or by biosynthesis. Regardless of the production method, the separation and purification process requires large amounts of raw materials and laborious stages, leading to high costs and limiting the separation technology. This study focuses on the separation of rosmarinic acid by synergic reactive extraction with a mixture of two extractants, one acidic (di-(2-ethylhexyl)phosphoric acid, D2EHPA) and one basic (Amberlite LA-2). The studies were performed in experimental equipment consisting of an extraction column in which the phases were mixed by means of a perforated disk, 45 mm in diameter with 20% free section, maintained at the initial contact interface between the aqueous and organic phases. The vibrations had a frequency of 50 s⁻¹ and an amplitude of 5 mm. The extraction was carried out in two solvents with different dielectric constants (n-heptane and dichloromethane), in which the extractant mixture was dissolved at varying concentrations. The pH value of the initial aqueous solution was varied between 1 and 7. The efficiency of the studied extraction systems was quantified by distribution and synergic coefficients; to calculate these parameters, the rosmarinic acid concentrations in the initial aqueous solution and in the raffinate were measured by HPLC. The influences of extractant concentrations and solvent polarity on the efficiency of rosmarinic acid separation by synergic extraction with a mixture of Amberlite LA-2 and D2EHPA were analysed.
In the reactive extraction system with a constant concentration of Amberlite LA-2 in the organic phase, increasing the D2EHPA concentration leads to a decrease of the synergic coefficient. This is because the increase of D2EHPA concentration prevents the formation of amine adducts and, consequently, affects the hydrophobicity of the interfacial complex with rosmarinic acid. For these reasons, the diminution of the synergic coefficient is more pronounced for dichloromethane. By maintaining a constant D2EHPA concentration and increasing the concentration of Amberlite LA-2, the synergic coefficient can become higher than 1, its highest values being reached for n-heptane. Depending on the solvent polarity and the D2EHPA amount in the solvent phase, a synergic effect is observed for Amberlite LA-2 concentrations over 20 g/l dissolved in n-heptane. Thus, by increasing the concentration of D2EHPA from 5 to 40 g/l, the minimum Amberlite LA-2 concentration corresponding to synergism increases from 20 to 40 g/l for the solvent with lower polarity, namely n-heptane, while no synergic effect is recorded for dichloromethane. By analysing the influences of the main factors (organic phase polarity, extractant concentration in the mixture) on the efficiency of synergic extraction of rosmarinic acid, the most important synergic effect was found to correspond to the extractant mixture containing 5 g/l D2EHPA and 40 g/l Amberlite LA-2 dissolved in n-heptane.
Keywords: Amberlite LA-2, di(2-ethylhexyl) phosphoric acid, rosmarinic acid, synergic effect
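The distribution and synergic coefficients used to quantify efficiency can be computed from the HPLC concentrations as sketched below. Equal phase volumes and the common definition S = D(mixture) / (D(extractant 1) + D(extractant 2)) are assumptions of this sketch, and the concentrations are illustrative, not the measured data:

```python
def distribution_coefficient(c_initial, c_raffinate):
    """D = amount extracted into the organic phase / amount left in the aqueous
    phase, assuming equal phase volumes."""
    return (c_initial - c_raffinate) / c_raffinate

def synergic_coefficient(d_mixture, d_amine, d_d2ehpa):
    """S > 1 indicates synergism of the mixture over the individual extractants."""
    return d_mixture / (d_amine + d_d2ehpa)

# Illustrative rosmarinic acid concentrations (g/l) before/after extraction.
d_mix = distribution_coefficient(1.00, 0.20)   # Amberlite LA-2 + D2EHPA
d_am = distribution_coefficient(1.00, 0.45)    # Amberlite LA-2 alone
d_de = distribution_coefficient(1.00, 0.70)    # D2EHPA alone
print(f"D(mixture) = {d_mix:.2f}, S = {synergic_coefficient(d_mix, d_am, d_de):.2f}")
```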
Procedia PDF Downloads 288
4910 Fabrication of Highly Conductive Graphene/ITO Transparent Bi-Film through Chemical Vapor Deposition (CVD) and Organic Additives-Free Sol-Gel Techniques
Authors: Bastian Waduge Naveen Harindu Hemasiri, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Indium tin oxide (ITO) remains the industry-standard transparent conducting oxide, with superior performance. Recently, graphene has emerged as a strong candidate, with unique properties, to replace ITO. Graphene/ITO hybrid composites, however, are a newly emerging field in electronics. In this study, a graphene/ITO composite bi-film was synthesized by a two-step process. 10 wt.% tin-doped ITO thin films were produced by an environmentally friendly aqueous sol-gel spin-coating technique from economical salts of In(NO3)3·H2O and SnCl4, without organic additives. Glass substrates with wettability and surface free energy (97.6986 mJ/m²) enhanced by oxygen plasma treatment were used to form a void-free, continuous ITO film. The spin-coated samples were annealed at 600 °C for 1 hour under low vacuum to obtain a crystallized ITO film. The crystal structure and crystalline phases of the ITO thin films were analyzed by X-ray diffraction (XRD), and the Scherrer equation was used to determine the crystallite size. Detailed information on the chemical and elemental composition of the ITO film was obtained by X-ray photoelectron spectroscopy (XPS) and by energy-dispersive X-ray spectroscopy (EDX) coupled with FE-SEM, respectively. Graphene was synthesized by chemical vapor deposition (CVD) on Cu foil at 1000 °C for 1 min. The quality of the synthesized graphene was characterized by Raman spectroscopy (532 nm excitation laser) at room temperature under normal atmosphere. Surface and cross-sectional observations were made using FE-SEM. The optical transmission and sheet resistance were measured by UV-Vis spectroscopy and a four-point probe head at room temperature, respectively. Electrical properties were also characterized via V-I measurements.
XRD patterns reveal that the films contain the In2O3 phase only and exhibit the polycrystalline nature of the cubic structure, with the main peak from the (222) plane. The peak positions of In3d5/2 (444.28 eV) and Sn3d5/2 (486.7 eV) in the XPS results indicate that indium and tin are present only in oxide form. The UV-visible transmittance is 91.35% at 550 nm, with a specific resistance of 5.88 × 10⁻³ Ω·cm. The G and 2D bands of the CVD graphene on SiO2/Si appear at 1582.52 cm⁻¹ and 2690.54 cm⁻¹, respectively, in the Raman spectra. The intensity ratios of 2D to G (I2D/IG) and D to G (ID/IG) were determined as 1.531 and 0.108, respectively. However, when the CVD graphene was transferred onto the ITO-coated glass, the G and 2D peaks appeared at 1573.57 cm⁻¹ and 2668.14 cm⁻¹, respectively; that is, the G and 2D peak positions were red-shifted by 8.948 cm⁻¹ and 22.396 cm⁻¹. The graphene/ITO bi-film shows modified electrical properties compared with the sol-gel-derived ITO film: the sheet resistance of the bi-film was reduced by 12.03% relative to the ITO film. Furthermore, the fabricated graphene/ITO bi-film shows 88.66% transmittance at 550 nm.
Keywords: chemical vapor deposition, graphene, ITO, Raman spectroscopy, sol-gel
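The crystallite-size estimate from the XRD data uses the Scherrer equation, D = Kλ / (β·cosθ), with β the peak's full width at half maximum in radians. The 2θ position below is near the (222) reflection of cubic In2O3; the FWHM value is a made-up illustration, not the paper's diffractogram:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size (nm) from the Scherrer equation D = K*lambda/(beta*cos(theta)).
    Cu K-alpha radiation and shape factor K = 0.9 are the usual assumptions."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical (222) peak of In2O3 near 2-theta = 30.6 deg with 0.35 deg FWHM.
print(f"D = {scherrer_size(30.6, 0.35):.1f} nm")
```

Note that a broader peak (larger FWHM) yields a smaller crystallite size, so instrumental broadening should be subtracted before applying the formula in practice.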
Procedia PDF Downloads 259
4909 The Analysis of TRACE/FRAPTRAN in the Fuel Rods of Maanshan PWR for LBLOCA
Authors: J. R. Wang, W. Y. Li, H. T. Lin, J. H. Yang, C. Shih, S. W. Chen
Abstract:
The Fuel Rod Analysis Program Transient (FRAPTRAN) code was used to study fuel rod performance during a postulated large-break loss-of-coolant accident (LBLOCA) in the Maanshan nuclear power plant (NPP). Previous transient results from the thermal-hydraulic code TRACE, for the same LBLOCA scenario, were used as input boundary conditions for FRAPTRAN. The simulation results showed that the peak cladding temperatures and the fuel centerline temperatures were all below the 10 CFR 50.46 LOCA criteria. In addition, the maximum hoop stress was 18 MPa and the oxide thickness was 0.003 mm for the present simulation cases, which are all within the safe operating ranges. The present study confirms that this analysis method, the FRAPTRAN code combined with TRACE, is an appropriate approach to predict fuel integrity under LBLOCA with operational ECCS.
Keywords: FRAPTRAN, TRACE, LOCA, PWR
Procedia PDF Downloads 510
4908 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products
Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis
Abstract:
The vehicle routing problem (VRP) is a well-known problem in operations research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from a depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we develop and analyze a mathematical model for a specific capacitated stochastic vehicle routing problem with many realistic applications, in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products, named product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities, and the number of items each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. The vehicle has two compartments, compartment 1 and compartment 2, suitable for loading product 1 and product 2, respectively. It is nevertheless permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions cause costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products.
The travel costs between consecutive customers and between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. A suitable dynamic programming algorithm can be developed for the determination of the optimal routing strategy, and it can be proved that the optimal routing strategy has a specific threshold-type structure. Specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over the strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed only if N is at most eight.
Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem
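The flavour of such a dynamic program can be conveyed by a deliberately simplified variant: one product, ordered customers on a line, demand revealed on arrival, and a proceed-or-restock decision after each customer. The instance data (positions, capacity, demand distribution) are assumptions of this sketch; the two-product, two-compartment model of the paper is not reproduced:

```python
from functools import lru_cache

# Illustrative instance: depot 0 and customers 1..4 on a line.
pos = [0.0, 2.0, 3.0, 5.0, 6.0]           # coordinates of depot + 4 customers
N, Q = 4, 3                               # number of customers, vehicle capacity
demand_pmf = {1: 0.5, 2: 0.3, 3: 0.2}     # demand revealed on arrival

def dist(a, b):
    return abs(pos[a] - pos[b])

@lru_cache(maxsize=None)
def f(j, q):
    """Minimum expected remaining cost after serving customer j
    with residual capacity q."""
    if j == N:
        return dist(N, 0)                 # route ends: return to the depot
    # action 1: go directly to customer j+1 (risking a forced depot trip)
    proceed = dist(j, j + 1)
    for d, p in demand_pmf.items():
        if d <= q:
            proceed += p * f(j + 1, q - d)
        else:                             # capacity runs out: forced round trip
            proceed += p * (2 * dist(j + 1, 0) + f(j + 1, Q - (d - q)))
    # action 2: restock at the depot before visiting customer j+1
    restock = dist(j, 0) + dist(0, j + 1)
    restock += sum(p * f(j + 1, Q - d) for d, p in demand_pmf.items())
    return min(proceed, restock)

total = dist(0, 1) + sum(p * f(1, Q - d) for d, p in demand_pmf.items())
print("expected cost from the depot:", round(total, 3))
```

In this simplified model the optimal action at each customer switches from "proceed" to "restock" once the residual capacity q drops below a critical integer, which is the threshold-type structure the abstract refers to.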
Procedia PDF Downloads 256
4907 Employees’ Satisfaction and Engagement in UAE: Antecedents and Outcomes
Authors: Sareh Rajabi, Taha Anjamrooz, Ahmed Hassan Almarzooqi
Abstract:
Employee satisfaction, engagement, and performance are crucial for successful organizations. The performance of employees now depends on their satisfaction level and on whether they are satisfied with management. For this reason, organizations now measure the satisfaction level of their employees in order to increase profitability and productivity and to reduce turnover. The aim of this research is to inspect the antecedents that lead to significant employee engagement and good job fit by finding the relationship between employee satisfaction and engagement. Based on an inclusive literature review of employee satisfaction, engagement, and performance, this research will conduct a survey in UAE organizations in order to develop a framework for evaluating, by means of statistical analysis, the impact of factors such as employee satisfaction and engagement on operations as an outcome. The study will further the understanding of the advantages of having satisfied employees who perform at peak motivation, making the company more profitable and competitive.
Keywords: employees' satisfaction, employees' engagement, antecedents, outcomes
Procedia PDF Downloads 149
4906 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment
Authors: Ujjwall Sai Sunder Uppuluri
Abstract:
Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how an agent-based model can, in theory, be formulated from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the usage of resources in order to achieve the objective of economic growth. This example can be applied to other social phenomena, which is what makes group theory so useful for the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex system models that researchers build, as discussed in the paper. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a three-dimensional plane, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states, with different constraints, pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and its resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest one. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, cannot continue its advance; to overcome the obstacle in its path, it must innovate. Based on the definition of a group, we can categorize each state as a group in its own right.
Each state is a closed system made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, whose inverse is anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they each employ a strategy, and the state with the better strategy (reflected in the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk, or fulfil some objective. This paper attempts to illustrate how group theory, combined with evolutionary theory and systems dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
Keywords: complex systems, evolutionary theory, group theory, international political economy
Procedia PDF Downloads 138
4905 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days; if a relocation or a change of period is required, the consumer must be notified in writing in advance of a billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, to the employee's working load and the geographical position of the consuming units. This process is currently done manually by several experts with experience in the geographic formation of the region; it takes a large number of days to complete the final planning and, being a human activity, offers no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final planning. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of working load and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
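The core of the approach above, k-means clustering with a workload-capacity constraint, can be sketched as follows. The genetic layer of GBKMeans (which evolves the centre initialisations) is omitted; the data, group count, capacity, and greedy assignment rule are all assumptions of this sketch:

```python
import math
import random

random.seed(42)

# Illustrative consumer units: (x, y, reading_time in some workload unit).
units = [(random.random() * 10, random.random() * 10, random.uniform(1, 3))
         for _ in range(60)]
K, CAP = 4, 50.0                      # number of reading groups, workload cap

def capacitated_kmeans(units, k, cap, iters=20):
    """Lloyd-style iteration with a greedy capacity-aware assignment step."""
    centers = [u[:2] for u in random.sample(units, k)]
    for _ in range(iters):
        loads = [0.0] * k
        groups = [[] for _ in range(k)]
        # assign heavier units first, each to the nearest centre with room left
        for u in sorted(units, key=lambda u: -u[2]):
            order = sorted(range(k), key=lambda c: math.dist(u[:2], centers[c]))
            c = next((c for c in order if loads[c] + u[2] <= cap), order[0])
            loads[c] += u[2]
            groups[c].append(u)
        # recompute centres as the mean position of each group
        centers = [(sum(u[0] for u in g) / len(g), sum(u[1] for u in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return groups, loads

groups, loads = capacitated_kmeans(units, K, CAP)
print("workloads per reading group:", [round(l, 1) for l in loads])
```

The standard deviation of `loads` and the within-group distances to each centre correspond to the homogeneity and compactness criteria that the paper's comparison reports.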
Procedia PDF Downloads 195