Search results for: multi-objective particle swarm optimization
2744 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments in a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model is intended to dynamically schedule arriving requests in the form of waiting lists, whereas the second model is used to reschedule the already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are formulated as mixed integer programs. Various controllable and flexible parameters, such as deviation of the prescribed target dates by a pre-determined threshold, changes to the start time of already booked appointments, and the maximum number of appointments to move in the schedule, are included in the proposed approach to provide sufficient degrees of flexibility in handling arriving requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves markedly better results than those of the scheduling system used in practice. Moreover, several analyses are conducted to evaluate the effect of considering different levels of flexibility on the obtained results and to assess the performance of the proposed approach in dealing with last-minute changes. We strongly believe that the proposed flexible adaptive approach is well-suited for implementation at the clinic to provide better patient-care services and to utilize available resources more efficiently.
Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
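A minimal sketch of the first of the two optimization models, a waiting-list assignment formulated as a mixed integer program, assuming the PuLP library; the patients, horizon, target dates, deviation threshold, and chair capacity below are hypothetical placeholders, not data from the clinic.

```python
# Minimal MIP sketch of appointment scheduling, assuming the PuLP library.
# Patients, slots, targets and the deviation threshold are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

patients = ["p1", "p2", "p3"]
days = list(range(5))                   # planning horizon (day indices)
target = {"p1": 1, "p2": 2, "p3": 2}    # prescribed target day per patient
threshold = 1                           # allowed deviation from the target date
capacity = 2                            # chairs available per day

prob = LpProblem("chemo_scheduling", LpMinimize)
x = {(p, d): LpVariable(f"x_{p}_{d}", cat=LpBinary) for p in patients for d in days}

# each patient is booked exactly once, within the allowed deviation window
for p in patients:
    prob += lpSum(x[p, d] for d in days) == 1
    for d in days:
        if abs(d - target[p]) > threshold:
            prob += x[p, d] == 0

# daily chair capacity
for d in days:
    prob += lpSum(x[p, d] for p in patients) <= capacity

# objective: minimize total deviation from the prescribed target dates
prob += lpSum(abs(d - target[p]) * x[p, d] for p in patients for d in days)

prob.solve(PULP_CBC_CMD(msg=0))
print({p: d for (p, d), var in x.items() if var.value() == 1})
```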
Procedia PDF Downloads 171
2743 The Path to Ruthium: Insights into the Creation of a New Element
Authors: Goodluck Akaoma Ordu
Abstract:
Ruthium (Rth) represents a theoretical superheavy element with an atomic number of 119, proposed within the context of advanced materials science and nuclear physics. The conceptualization of Rth involves theoretical frameworks that anticipate its atomic structure, including a hypothesized stable isotope, Rth-320, characterized by 119 protons and 201 neutrons. The synthesis of Ruthium (Rth) hinges on intricate nuclear fusion processes conducted in state-of-the-art particle accelerators, notably utilizing Calcium-48 (Ca-48) as a projectile nucleus and Einsteinium-253 (Es-253) as a target nucleus. These experiments aim to induce fusion reactions that yield Ruthium isotopes, such as Rth-301, accompanied by neutron emission. Theoretical predictions outline various physical and chemical properties attributed to Ruthium (Rth). It is envisaged to possess a high density, estimated at around 25 g/cm³, with melting and boiling points anticipated to be exceptionally high, approximately 4000 K and 6000 K, respectively. Chemical studies suggest potential oxidation states of +2, +3, and +4, indicating a versatile reactivity, particularly with halogens and chalcogens. The atomic structure of Ruthium (Rth) is postulated to feature an electron configuration of [Rn] 5f^14 6d^10 7s^2 7p^2, reflecting its position in the periodic table as a superheavy element. However, the creation and study of superheavy elements like Ruthium (Rth) pose significant challenges. These elements typically exhibit very short half-lives, posing difficulties in their stabilization and detection. Research efforts are focused on identifying the most stable isotopes of Ruthium (Rth) and developing advanced detection methodologies to confirm their existence and properties. Specialized detectors are essential in observing decay patterns unique to Ruthium (Rth), such as alpha decay or fission signatures, which serve as key indicators of its presence and characteristics. The potential applications of Ruthium (Rth) span across diverse technological domains, promising innovations in energy production, material strength enhancement, and sensor technology. Incorporating Ruthium (Rth) into advanced energy systems, such as the Arc Reactor concept, could potentially amplify energy output efficiencies. Similarly, integrating Ruthium (Rth) into structural materials, exemplified by projects like the NanoArc gauntlet, could bolster mechanical properties and resilience. Furthermore, Ruthium (Rth)-based sensors hold promise for achieving heightened sensitivity and performance in various sensing applications. Looking ahead, the study of Ruthium (Rth) represents a frontier in both fundamental science and applied research. It underscores the quest to expand the periodic table and explore the limits of atomic stability and reactivity. Future research directions aim to delve deeper into Ruthium (Rth)'s atomic properties under varying conditions, paving the way for innovations in nanotechnology, quantum materials, and beyond. The synthesis and characterization of Ruthium (Rth) stand as a testament to human ingenuity and technological advancement, pushing the boundaries of scientific understanding and engineering capabilities. In conclusion, Ruthium (Rth) embodies the intersection of theoretical speculation and experimental pursuit in the realm of superheavy elements. It symbolizes the relentless pursuit of scientific excellence and the potential for transformative technological breakthroughs.
As research continues to unravel the mysteries of Ruthium (Rth), it holds the promise of reshaping materials science and opening new frontiers in technological innovation.
Keywords: superheavy element, nuclear fusion, bombardment, particle accelerator, nuclear physics, particle physics
Procedia PDF Downloads 39
2742 Regret-Regression for Multi-Armed Bandit Problem
Authors: Deyadeen Ali Alshibani
Abstract:
In the literature, the multi-armed bandit problem is described as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. There are several different algorithmic models and their applications to this problem. In this paper, we evaluate the regret-regression method by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
Keywords: optimal, bandit problem, optimization, dynamic programming
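Since the abstract does not give the regret-regression formulation itself, the sketch below only illustrates the underlying bandit setting: an epsilon-greedy learner whose cumulative regret is tracked against the best arm; the arm payoffs are hypothetical.

```python
# Hypothetical illustration of cumulative regret on a multi-armed bandit;
# the regret-regression method itself is not specified in the abstract, so a
# simple epsilon-greedy learner is simulated as a point of reference.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])   # unknown arm payoffs (hypothetical)
n_arms, horizon, eps = len(true_means), 5000, 0.1

counts = np.zeros(n_arms)
estimates = np.zeros(n_arms)
regret = 0.0

for t in range(horizon):
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))       # explore
    else:
        arm = int(np.argmax(estimates))       # exploit current estimates
    reward = rng.binomial(1, true_means[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
    regret += true_means.max() - true_means[arm]

print(f"cumulative regret after {horizon} pulls: {regret:.1f}")
```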
Procedia PDF Downloads 454
2741 Surface Coating of Polyester Fabrics by Sol Gel Synthesized ZnO Particles
Authors: Merve Küçük, M. Lütfi Öveçoğlu
Abstract:
Zinc oxide particles were synthesized using the sol-gel method and dip coated on polyester fabric. X-ray diffraction (XRD) analysis revealed a single crystal phase of ZnO particles. Chemical characteristics of the polyester fabric surface were investigated using attenuated total reflection-Fourier transform infrared (ATR-FTIR) measurements. The morphology of the ZnO coated fabric was analyzed using field emission scanning electron microscopy (FESEM). Particle size analysis showed that the aqueous ZnO solution had a narrow size distribution at submicron levels. The deposit of ZnO on the polyester fabrics yielded a homogeneous spread of spherical particles. Energy dispersive X-ray spectroscopy (EDX) results also affirmed the presence of ZnO particles on the polyester fabrics.
Keywords: dip coating, polyester fabrics, sol gel, zinc oxide
Procedia PDF Downloads 435
2740 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” employees, and establishing robust data management training programs play an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.
Keywords: master data management, IoT, AI&ML, cloud computing, data optimization
Procedia PDF Downloads 72
2739 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter
Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski
Abstract:
Today’s progress in rotorcraft is mostly associated with an optimization of aircraft performance achieved by active and passive modifications of main rotor assemblies and a tail propeller. The key task is to improve their performance, improving the hover quality factor of rotors without a change in specific fuel consumption. One of the ways to improve the helicopter is an active optimization of the main rotor for the different flight stages, i.e., ascent, cruise, and descent. An active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems modifying aerodynamic coefficients in the current solutions is relatively high and significantly affects the increase in strength. The solution to actively change aerodynamic characteristics assumes a periodic change of the geometric features of the blades depending on the flight stage. Changing the geometric parameters of blade warping enables an optimization of main rotor performance depending on the helicopter flight stage. Structurally, an adaptation of shape memory alloys does not significantly affect rotor blade fatigue strength, which helps to reduce the costs associated with adapting the system to the existing blades, and gains from better performance can easily amortize such a modification and improve the profitability of such a structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is a selection of design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor of the rotor. This design concept assumes a three-bladed main rotor with a chord of 0.07 m and radius R = 1 m. The value of rotor speed is a calculated parameter of an optimization function. To specify the initial distribution of geometric warping, special software has been created that uses a blade element numerical method which respects dynamic design features such as oscillations of a blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of components and their mutual interaction resulting from the forces. The key elements of each rotor are the shaft, the hub, and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which gives grounds to obtain certification for the strength of the structure. The jointed rotor has numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear and tear may appear, have been indicated. The numerical analysis carried out showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element result in a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2, and in the case of the shaft, this factor is more than 3. However, it must be remembered that the structure is only as strong as its weakest element.
The rotor designed for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter
Procedia PDF Downloads 160
2738 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction
Authors: Luis C. Parra
Abstract:
The prediction of significant wave height is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and its complexity of prediction. This study aims to present a machine learning model to forecast the significant wave height at the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network-genetic algorithm (GA-MLP), considering ReLU(x) as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the prediction optimization of 5 steps forward, obtaining a performance evaluation of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model, outperforming it on all performance criteria and validating the potential of this algorithm.
Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms
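For reference, the four reported error metrics can be computed as follows; the wave-height arrays are placeholders, not the Mooloolaba buoy series.

```python
# Computing the performance metrics quoted in the abstract (MSE, RMSE,
# MAE, MAPE); the wave-height arrays here are placeholders, not the
# Mooloolaba buoy data.
import numpy as np

y_true = np.array([1.20, 1.35, 1.10, 1.48])   # observed significant wave height (m)
y_pred = np.array([1.18, 1.37, 1.12, 1.45])   # model forecast (m)

mse = np.mean((y_true - y_pred) ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(y_true - y_pred))
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(f"MSE={mse:.5f} RMSE={rmse:.5f} MAE={mae:.5f} MAPE={mape:.3f}%")
```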
Procedia PDF Downloads 108
2737 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft in an airport, allocating a path to each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase the accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve a better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays that are caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
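A compact sketch of the A-star idea described above, ranking nodes by elapsed time plus an estimated remaining time; the taxiway graph, traversal times, and remaining-time estimates are hypothetical, and time windows are omitted for brevity.

```python
# A*-style quickest-path search ordered by (elapsed + estimated remaining)
# time; the taxiway graph and all numbers are hypothetical placeholders.
import heapq

graph = {                     # node -> [(neighbour, traversal time in s)]
    "stand": [("taxi_a", 60)],
    "taxi_a": [("taxi_b", 45), ("taxi_c", 90)],
    "taxi_b": [("runway", 120)],
    "taxi_c": [("runway", 30)],
    "runway": [],
}
est_remaining = {"stand": 150, "taxi_a": 120, "taxi_b": 100, "taxi_c": 30, "runway": 0}

def quickest_path(start, goal):
    frontier = [(est_remaining[start], 0, start, [start])]
    best = {}
    while frontier:
        _, elapsed, node, path = heapq.heappop(frontier)
        if node == goal:
            return elapsed, path
        if best.get(node, float("inf")) <= elapsed:
            continue                          # already reached sooner
        best[node] = elapsed
        for nxt, dt in graph[node]:
            t = elapsed + dt
            heapq.heappush(frontier, (t + est_remaining[nxt], t, nxt, path + [nxt]))
    return None

print(quickest_path("stand", "runway"))   # -> (180, ['stand', 'taxi_a', 'taxi_c', 'runway'])
```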
Procedia PDF Downloads 231
2736 Investigation of the Effect of Nano-Alumina Particles on Adsorption Property of Acrylic Fiber
Authors: Mehdi Ketabchi, Shallah Alijanlo
Abstract:
The flue gas from fossil fuel combustion contains harmful pollutants dangerous to human health and the environment. One of the air pollution control methods to restrict the emission of these pollutants is based on using nanoparticles in an adsorption process. In the present research, gamma nano-alumina particles are added to polyacrylonitrile (PAN) polymer through a simple loading method, and the adsorption capacity of the wet-spun fiber is investigated. The results of exposing the fiber to acid gases, including SO2, CO, NO2, NO, and CO2, show a noticeable increase in the gas adsorption capacity of the fiber containing the nanoparticles. The research has been conducted at the Acrylic II Plant of Polyacryl Iran Corporation.
Keywords: acrylic fiber, adsorbent, wet spun, polyacryl company, nano gamma alumina
Procedia PDF Downloads 179
2735 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, by exploring the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We recognized that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk by either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of either of the cases were observed in districts found in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix
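A minimal sketch of the structure the graph-based approach works with: areal units as vertices and neighbor relations as edges, assembled into a binary weight matrix and then row-standardised, as is common for CAR priors; the districts and edges are hypothetical.

```python
# Building a neighbourhood (adjacency) matrix from a graph of areal units;
# the districts and neighbour relations here are hypothetical.
import numpy as np

districts = ["d1", "d2", "d3", "d4"]
edges = [("d1", "d2"), ("d2", "d3"), ("d3", "d4"), ("d1", "d3")]  # neighbour relations

idx = {d: i for i, d in enumerate(districts)}
W = np.zeros((len(districts), len(districts)), dtype=int)
for a, b in edges:
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = 1   # symmetric binary weights

# row-standardised weights, as commonly used in CAR priors
W_std = W / W.sum(axis=1, keepdims=True)
print(W_std)
```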
Procedia PDF Downloads 83
2734 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influences of multiple factors such as concurrent multiple computation tasks, system computing resources distribution, and network communication bandwidth. The mixed integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network
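As a rough illustration of the reinforcement learning loop behind JCOTM, the sketch below uses a tabular Q-learning stand-in for the deep Q-network, with coarse system-load states and offloading actions (local, edge, cloud); all costs and transition behavior are invented for illustration, not taken from the VAMECN experiments.

```python
# Tabular Q-learning stand-in for a deep Q-network: states are coarse
# system-load levels, actions are offloading targets (local, edge, cloud);
# the cost model and transitions are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 3          # load level x {local, edge, cloud}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # hypothetical cost model: delay + energy penalty, lower is better
    cost = [[5, 2, 4], [6, 3, 4], [7, 5, 4], [9, 8, 5]][state][action]
    next_state = int(rng.integers(n_states))     # random load fluctuation
    return next_state, -cost                     # reward = negative cost

state = 0
for _ in range(20000):
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[state]))
    nxt, reward = step(state, action)
    # Bellman update toward reward + discounted best next value
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("preferred offloading action per load level:", np.argmax(Q, axis=1))
```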
Procedia PDF Downloads 120
2733 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry takes a relatively large portion of the total CO2 emissions of the world, extensive studies about reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been developed vigorously since the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code based design approach cannot address inelastic earthquake responses directly or assure the performance of buildings exactly. Although CO2 emissions and the PBD approach are recent rising issues in the construction industry and structural engineering, there has been little or no research considering these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance objective (collapse prevention in the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design result showed that minimized CO2 emissions and cost of the building were acquired while the specific seismic performance was satisfied. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
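A hand-rolled sketch of the Pareto-dominance test that NSGA-II's non-dominated sorting builds on, applied to hypothetical (CO2 emissions, cost) pairs; the actual study additionally enforces the collapse-prevention seismic performance constraint.

```python
# Pareto-dominance test over hypothetical (CO2 emissions, cost) objective
# pairs for candidate frame designs; both objectives are minimized.
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

designs = {                       # design -> (CO2 emissions [t], cost [k$])
    "A": (120.0, 310.0),
    "B": (135.0, 290.0),
    "C": (128.0, 305.0),
    "D": (140.0, 320.0),          # dominated by A
}

front = [d for d, f in designs.items()
         if not any(dominates(g, f) for e, g in designs.items() if e != d)]
print("non-dominated designs:", front)   # -> ['A', 'B', 'C']
```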
Procedia PDF Downloads 408
2732 Construction and Optimization of Green Infrastructure Network in Mountainous Counties Based on Morphological Spatial Pattern Analysis and Minimum Cumulative Resistance Models: A Case Study of Shapingba District, Chongqing
Authors: Yuning Guan
Abstract:
Under the background of rapid urbanization, mountainous counties need to break through mountain barriers for urban expansion due to undulating topography, resulting in ecological problems such as landscape fragmentation and reduced biodiversity. Green infrastructure networks are constructed to alleviate the contradiction between urban expansion and ecological protection, promoting the healthy and sustainable development of urban ecosystems. This study applies the MSPA model, the MCR model, and Linkage Mapper Tools to identify eco-sources and eco-corridors in the Shapingba District of Chongqing, and combines landscape connectivity assessment and circuit theory to delineate importance levels and extract ecological pinch point areas on the corridors. The results show that: (1) 20 ecological sources are identified, with a total area of 126.47 km², accounting for 31.88% of the study area, and showing a pattern of ‘one core, three corridors, multi-point distribution’. (2) 37 ecological corridors are formed in the area, with a total length of 62.52 km, in a ‘more in the west, less in the east’ pattern. (3) 42 ecological pinch points are extracted, accounting for 25.85% of the length of the corridors, which are mainly distributed in the eastern new area. Accordingly, this study proposes optimization strategies for sub-area protection of ecological sources, grade-level construction of ecological corridors, and precise restoration of ecological pinch points.
Keywords: green infrastructure network, morphological spatial pattern, minimal cumulative resistance, mountainous counties, circuit theory, shapingba district
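For reference, the minimum cumulative resistance model cited above is commonly written in the following form, where D_ij is the distance crossed from source j over landscape unit i, R_i is the resistance coefficient of unit i, and f_min takes the minimum cumulative value over all paths; this is the standard published form of the model, not a formula quoted from this abstract.

```latex
\mathrm{MCR} = f_{\min} \sum_{j=n}^{i=m} \left( D_{ij} \times R_{i} \right)
```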
Procedia PDF Downloads 47
2731 Enhancing Algal Bacterial Photobioreactor Efficiency: Nutrient Removal and Cost Analysis Comparison for Light Source Optimization
Authors: Shahrukh Ahmad, Purnendu Bose
Abstract:
Algal-bacterial photobioreactors (ABPBRs) have emerged as a promising technology for sustainable biomass production and wastewater treatment. Nutrient removal is seldom performed in sewage treatment plants, and large volumes of wastewater that still contain nutrients are discharged, which can lead to eutrophication. That is why the ABPBR plays a vital role in wastewater treatment. However, improving the efficiency of the ABPBR remains a significant challenge. This study aims to enhance ABPBR efficiency by focusing on two key aspects: nutrient removal and cost-effective optimization of the light source. By integrating nutrient removal and cost analysis for light source optimization, this study proposes practical strategies for improving ABPBR efficiency. To reduce organic carbon and convert ammonia to nitrates, domestic wastewater from a 130 MLD sewage treatment plant (STP) was aerated with a hydraulic retention time (HRT) of 2 days. The treated supernatant had approximate nitrate and phosphate values of 16 ppm as N and 6 ppm as P, respectively. This supernatant was then fed into the ABPBR, and the removal of nutrients (nitrate as N and phosphate as P) was observed using different colored LED bulbs, namely white, blue, red, yellow, and green. The ABPBR operated with a 9-hour light and 3-hour dark cycle, using only one color of bulbs per cycle. The study found that the white LED bulb, with a photosynthetic photon flux density (PPFD) value of 82.61 µmol·m⁻²·s⁻¹, exhibited the highest removal efficiency. It achieved a removal rate of 91.56% for nitrate and 86.44% for phosphate, surpassing the other colored bulbs. Conversely, the green LED bulbs showed the lowest removal efficiencies, with 58.08% for nitrate and 47.48% for phosphate at an HRT of 5 days. The quantum PAR (Photosynthetically Active Radiation) meter measured the photosynthetic photon flux density for each colored bulb setting inside the photo chamber, confirming that the white LED bulbs operated over a wider wavelength band than the others. Furthermore, a cost comparison was conducted for each colored bulb setting. The study revealed that the white LED bulb had the lowest average cost (INR) to light intensity (µmol·m⁻²·s⁻¹) ratio, at 19.40, while the green LED bulbs had the highest, at 115.11. Based on these comparative tests, it was concluded that the white LED bulbs were the most efficient and cost-effective light source for an algal photobioreactor. They can be effectively utilized for nutrient removal from secondary treated wastewater, helping to improve the overall wastewater quality before it is discharged back into the environment.
Keywords: algal bacterial photobioreactor, domestic wastewater, nutrient removal, led bulbs
Procedia PDF Downloads 81
2730 Factors Affecting Aluminum Dissolve from Acidified Water Purification Sludge
Authors: Wen Po Cheng, Chi Hua Fu, Ping Hung Chen, Ruey Fang Yu
Abstract:
Recovering resources from water purification sludge (WPS) has been gradually stipulated in environmental protection laws and regulations in many nations. Hence, reusing WPS is becoming an important topic, and recovering alum from WPS is one of the many practical alternatives. Most previous research efforts have been conducted on studying the amphoteric characteristic of aluminum hydroxide to investigate the optimum pH range for dissolving the Al(III) species from WPS, but discussion of the reaction kinetics and mechanisms has been lacking. Therefore, in this investigation, the WPS solution was broken up by ultrasound to make the particle size of the reactants smaller and the specific surface area larger. According to reaction kinetics, these phenomena increase the quantity of dissolved aluminum salt and make the reaction rate faster.
Keywords: aluminum, acidification, sludge, recovery
Procedia PDF Downloads 632
2729 Approximate Spring Balancing for the Arm of a Humanoid Robot to Reduce Actuator Torque
Authors: Apurva Patil, Ashay Aswale, Akshay Kulkarni, Shubham Bharadiya
Abstract:
The potential benefit of gravity compensation of linkages in mechanisms using springs to reduce actuator requirements is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs with child–parent connections and no auxiliary links. The application of this method to the developed arm of a humanoid robot is presented here. Spring balancing is particularly important in this case because the serial chain of linkages has to work against gravity. This work involves approximate spring balancing of the open-loop chain of linkages using minimization of potential energy variance. It uses the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in actuator torque requirements with a practical spring design and arrangement. Reduced actuator torque facilitates the use of lower-end actuators, which are generally smaller in weight and volume, thereby lowering the space requirements and the total weight of the arm. This is particularly important for humanoid robots, where the parent actuator has to handle the weight of the subsequent actuators as well. Actuators with lower actuation requirements are more energy efficient, thereby reducing the energy consumption of the mechanism. Lower-end actuators are lower in cost and facilitate the development of low-cost devices. Although the method provides only approximate balancing, it is versatile, flexible in choosing appropriate control variables relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring, and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached to the preceding parent link, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resultant arm of the humanoid robot is compact. The cost benefits and reduced complexity can be significant advantages in the development of this arm of the humanoid robot.
Keywords: actuator torque, child-parent connections, spring balancing, the arm of a humanoid robot
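A minimal sketch of the optimization described above, assuming scipy: the variance of the total potential energy (gravity plus spring) is minimized over sampled poses of a single link, with the spring constant, attachment radius, and free length as design variables; the single-link geometry and all dimensions are hypothetical simplifications of the multi-link arm.

```python
# Approximate spring balancing of one link by minimizing the variance of
# total potential energy over the workspace; geometry and numbers are
# hypothetical, assuming scipy is available.
import numpy as np
from scipy.optimize import minimize

m, g, L = 2.0, 9.81, 0.4                 # link mass (kg), gravity, link length (m)
thetas = np.linspace(0, np.pi / 2, 50)   # sampled workspace poses

def energy_variance(params):
    k, r, x0 = params            # spring stiffness, attachment radius, free length
    gravity_pe = m * g * (L / 2) * np.sin(thetas)
    # spring length for an anchor assumed at height L above the joint
    dist = np.sqrt(r**2 + L**2 - 2 * r * L * np.sin(thetas))
    spring_pe = 0.5 * k * np.maximum(dist - x0, 0.0) ** 2   # tension only
    return np.var(gravity_pe + spring_pe)   # flatten the energy landscape

res = minimize(energy_variance, x0=[100.0, 0.1, 0.05],
               bounds=[(1, 2000), (0.01, L), (0.0, 0.5)])
print("spring constant, attachment radius, free length:", res.x)
```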
Procedia PDF Downloads 246
2728 Mechanochemical Synthesis of Al2O3/Mo Nanocomposite Powders from Molybdenum Oxide
Authors: Behrooz Ghasemi, Bahram Sharijian
Abstract:
Al2O3/Mo nanocomposite powders were successfully synthesized by mechanical milling through a mechanochemical reaction between MoO3 and Al. The structural evolution of the powder particles during mechanical milling was studied by X-ray diffractometry (XRD), energy dispersive X-ray spectroscopy (EDX), and scanning electron microscopy (SEM). Results show that Al2O3-Mo was completely obtained after 5 hr of milling. The crystallite sizes of Al2O3 and Mo after milling for 20 hr were about 45 nm and 23 nm, respectively. With longer milling time, the intensities of the Al2O3 and Mo peaks decreased and became broad due to the decrease in crystallite size. Morphological features of the powders were influenced by the milling time. The resulting Al2O3-Mo nanocomposite powder exhibited an average particle size of 200 nm after 20 hr of milling. Also, the nanocomposite powder after 10 hr of milling had a relatively equiaxed shape with a uniformly distributed Mo phase in the Al2O3 matrix.
Keywords: Al2O3/Mo, nanocomposites, mechanochemical, mechanical milling
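The abstract does not state how the crystallite sizes were obtained, but XRD line broadening of this kind is conventionally related to crystallite size through the Scherrer equation, which also explains why the peaks become broad as the crystallite size decreases:

```latex
D = \frac{K\lambda}{\beta\cos\theta}
```

where D is the crystallite size, K ≈ 0.9 is the shape factor, λ is the X-ray wavelength, β is the peak width (FWHM, in radians), and θ is the Bragg angle.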
Procedia PDF Downloads 369
2727 Preparation and Properties of Gelatin-Bamboo Fibres Foams for Packaging Applications
Authors: Luo Guidong, Song Hang, Jim Song, Virginia Martin Torrejon
Abstract:
Due to their excellent properties, polymer packaging foams have become increasingly essential in our current lifestyles. They are cost-effective and lightweight, with excellent mechanical and thermal insulation properties. However, they constitute a major environmental and health concern due to litter generation, ocean pollution, and microplastic contamination of the food chain. In recent years, considerable efforts have been made to develop more sustainable alternatives to conventional polymer packaging foams. As a result, biobased and compostable foams, such as starch-based loose-fill or PLA trays, are increasingly becoming commercially available. However, there is still a need for bulk manufacturing of bio-foam planks for packaging applications as a viable alternative to their fossil fuel counterparts (i.e., polystyrene, polyethylene, and polyurethane). Gelatin is a promising biopolymer for packaging applications due to its biodegradability, availability, and biocompatibility, but its mechanical properties are poor compared to conventional plastics. However, as widely reported for other biopolymers such as starch, the mechanical properties of gelatin-based bioplastics can be enhanced by formulation optimization, for example, the incorporation of fibres from crops such as bamboo. This research aimed to produce gelatin-bamboo fibre foams by mechanical foaming and to study the effect of fibre content on the foams' properties and structure. As a result, foams with virtually no shrinkage, low density (<40 kg/m³), low thermal conductivity (<0.044 W/m•K), and mechanical properties comparable to conventional plastics were produced. Further work should focus on developing formulations suitable for the packaging of water-sensitive products and on processing optimization, especially the reduction of the drying time.
Keywords: biobased and compostable foam, sustainable packaging, natural polymer hydrogel, cold chain packaging
Procedia PDF Downloads 108
2726 Structural, Optical, and Ferroelectric Properties of BaTiO3 Sintered at Different Temperatures
Authors: Anurag Gaur, Neha Sharma
Abstract:
In this work, we have synthesized BaTiO3 via the sol-gel method, sintering at different temperatures (600-1000 °C), and studied the structural, optical, and ferroelectric properties through X-ray diffraction (XRD), UV-Vis spectrophotometry, and a PE loop tracer. X-ray diffraction patterns of the barium titanate samples show that the peaks of the diffractogram are successfully indexed with the tetragonal structure of BaTiO3, along with some minor impurities of BaCO3. The optical band gap calculated through the UV-Vis spectrophotometer varies from 4.37 to 3.80 eV for the samples sintered at 600 to 1000 °C, respectively. The particle size calculated through transmission electron microscopy varies from 20 to 60 nm for the samples sintered at 600 to 1000 °C, respectively. Moreover, it has been observed that the ferroelectricity decreases as we increase the sintering temperature.
Keywords: nanostructures, ferroelectricity, sol-gel method, diffractogram
Procedia PDF Downloads 428
2725 A Study in Optimization of FSI (Floor Space Index) in Kerala
Authors: Anjali Suresh
Abstract:
Kerala is well known for its unique settlement pattern, comprising, for the most part, a continuous spread of habitation. The notable urbanization trend in Kerala is urban spread rather than concentration, which points to the increasing urbanization of peripheral areas of existing urban centers. This has thrown up a challenge for the authorities to cater to the needs of the urban population, such as providing affordable housing and infrastructure facilities to sustain their livelihoods, which is a matter of concern that needs policy attention in fixing the optimum FSI value. Based on recent reports (Post Disaster Needs Analysis, PDNA) from the UN, addressing the unsafe situation of the carpet FAR/FSI practice in a state with such varying geological and climatic conditions should also be a matter of concern. The FSI (floor space index: the ratio of the built-up space on a plot to the area of the plot) value is certainly one of the key regulatory factors in checking land utilization for the varying occupancies desired for the overall development of a state with limited land availability compared to its neighbors. The pattern of urbanization, physical conditions, topography, etc., varies within the state and can change remarkably over time, which indicates that the FSI norms practiced in Kerala do not fulfil the intended function. Thus, the FSI regulation is expected to change dynamically from location to location. So, for determining the optimum value of FSI/FAR for a region in the state of Kerala, the government agencies should consider the optimum land utilization for the growing urbanization while, on the other hand, keeping in check the overutilization of land, in keeping with its environmental and geographic nature. Therefore, the study identifies parameters that should be considered for assigning FSI within the Kerala context and, through expert surveys and opinions, arrives at a methodology for assigning an optimum FSI value for a region in the state of Kerala.
Keywords: floor space index, urbanization, density, civic pressure, optimization
Procedia PDF Downloads 105
2724 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads
Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani
Abstract:
The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product from argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, primarily focusing on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with known concentrations of these isotopes. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process. The Langmuir isotherm and pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology
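A minimal sketch of fitting the quadratic response-surface model behind a Box-Behnken design by ordinary least squares; the (pH, contact time, dosage) rows and removal percentages below are hypothetical stand-ins for the actual design matrix, not the study's data.

```python
# Ordinary least-squares fit of a full quadratic response surface in three
# factors; the design rows and responses are hypothetical placeholders.
import numpy as np

# columns: pH, contact time (min), adsorbent dose (g/L); response: removal (%)
X_raw = np.array([[3, 30, 0.5], [9, 30, 1.5], [3, 150, 1.5], [9, 150, 0.5],
                  [6, 90, 1.5], [6, 90, 0.5], [3, 90, 2.5], [9, 90, 2.5],
                  [6, 30, 2.5], [6, 150, 2.5], [6, 90, 1.5], [6, 90, 1.5]])
y = np.array([61, 70, 75, 68, 88, 79, 72, 74, 77, 90, 87, 89])

def quad_terms(x):
    # intercept, linear, two-factor interaction, and squared terms
    a, b, c = x
    return [1, a, b, c, a*b, a*c, b*c, a*a, b*b, c*c]

A = np.array([quad_terms(row) for row in X_raw])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 3))
```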
Procedia PDF Downloads 39
2723 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, different possible frequency response characteristics can be used to achieve certain required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltage and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD), where maintaining a given power factor within a specified range is desired. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
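A small sketch of the textbook relations for sizing a single tuned passive filter from a target reactive power and the harmonic order to be trapped; the 11 kV bus, 2 Mvar rating, 5th-harmonic tuning, and quality factor are assumed values, not taken from the paper.

```python
# Textbook sizing of a single tuned (series RLC) passive filter; all input
# values are assumed for illustration.
import math

V = 11e3            # line-to-line bus voltage (V)
Qc = 2e6            # reactive power the filter supplies (var)
f1, h = 50.0, 5     # fundamental frequency (Hz), tuned harmonic order

# capacitor chosen for the reactive power, corrected for the tuning reactor
C = Qc * (h**2 - 1) / (h**2 * 2 * math.pi * f1 * V**2)
# inductor chosen so the series branch resonates at h * f1
L = 1.0 / ((2 * math.pi * h * f1) ** 2 * C)
# resistor from an assumed quality factor Q = X0 / R
Q_factor = 50
X0 = math.sqrt(L / C)           # characteristic reactance at resonance
R = X0 / Q_factor

print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```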
Procedia PDF Downloads 308
2722 The Sub-Optimality of the Electricity Subsidy on Tube Wells in Balochistan (Pakistan): An Analysis Based on Socio-Cultural and Policy Distortions
Authors: Rameesha Javaid
Abstract:
Agriculture is the backbone of the economy of the province of Balochistan, which is known as the ‘fruit basket’ of Pakistan. Its climate zones, comprising highlands and plateaus dependent on rain water, are more suited to the production of deciduous fruit. The vagaries of weather, and more so the persistent droughts, prompted the government to announce flat monthly electricity rates irrespective of the size of the farm, the quantum of water used, and the category of crop group. That has, no doubt, resulted in increased cropping intensity, more production, and employment, but has enormously burdened the official exchequer, which picks up the residual bills in certain percentages shared amongst the federal and provincial governments and the local electricity company. This study tests the desirability of continuing the subsidy in its present mode. The optimization of the social welfare of farmers has been the focus of the study, with emphasis on the contribution of positive externalities and on distortions caused in terms of negative externalities. By using the optimization technique with due allowance for distortions, it has been established that the subsidy calls for limiting policy distortions, as they cause sub-optimal utilization of the tube well subsidy, and for improved policy programming. The sensitivity analysis with changed rankings of the variables contributing towards social welfare does not significantly change the result. This leads to the net findings and policy recommendations of significantly reducing the subsidy size, correcting and curtailing policy distortions, and targeting the subsidy grant more towards small farmers, to generate more welfare by saving a sizeable amount from the subsidy for investment in the wellbeing of the farmers in rural Balochistan.
Keywords: distortion, policy distortion, socio-cultural distortion, social welfare, subsidy
Procedia PDF Downloads 293
2721 Design and Optimisation of 2-Oxoglutarate Dioxygenase Expression in Escherichia coli Strains for Production of Bioethylene from Crude Glycerol
Authors: Idan Chiyanzu, Maruping Mangena
Abstract:
Crude glycerol, a major by-product from the transesterification of triacylglycerides with alcohol to biodiesel, is known to have a broad range of applications. For example, its bioconversion can afford a wide range of chemicals, including alcohols, organic acids, hydrogen, solvents, and intermediate compounds. In bacteria, the 2-oxoglutarate dioxygenase (2-OGD) enzymes are widely found among Pseudomonas syringae species and have been recognized as being of emerging importance in ethylene formation. However, the use of optimized enzyme function in recombinant systems for crude glycerol conversion to ethylene has still not been reported. The present study investigated the production of ethylene from crude glycerol using engineered E. coli MG1655 and JM109 strains. Ethylene production with an optimized expression system for 2-OGD in E. coli, using a codon-optimized construct of the ethylene-forming gene, was studied. The codon optimization resulted in a 20-fold increase in protein production and thus an enhanced production of ethylene gas. For reliable bioreactor performance, the effects of temperature, fermentation time, pH, substrate concentration, methanol concentration, potassium hydroxide concentration, and media supplements on ethylene yield were investigated. The results demonstrate that the recombinant enzyme can be used in future studies to exploit the conversion of low-priced crude glycerol into advanced value products like light olefins, and that tools including recombineering techniques for DNA, molecular biology, and bioengineering can be used to allow the production of ethylene directly from the fermentation of crude glycerol. It can be concluded that recombinant E. coli production systems represent a significantly more secure, renewable, and environmentally safe alternative to the thermochemical approach to ethylene production.
Keywords: crude glycerol, bioethylene, recombinant E. coli, optimization
Procedia PDF Downloads 280
2720 Properties of Cement Pastes with Different Particle Size Fractions of Metakaolin
Authors: M. Boháč, R. Novotný, F. Frajkorová, R. S. Yadav, T. Opravil, M. Palou
Abstract:
The properties of Portland cement mixtures with various fractions of metakaolin were studied. 10% of Portland cement CEM I 42.5 R was replaced by different fractions of high-reactivity metakaolin with defined chemical and mineralogical properties. The various fractions of metakaolin were prepared by a jet mill classifying system. There is a clear trend between the fineness of the metakaolin and hydration heat development. Due to the presence of metakaolin in the mixtures, the compressive strength development of the mortars is somewhat slower for coarser fractions, but 28-day flexural strengths are improved for all fractions of metakaolin used in the mixtures compared to the reference sample of pure Portland cement. The yield point, plastic viscosity, and adhesion of the fresh pastes are considerably influenced by the fineness of the metakaolin used in the cement pastes.
Keywords: calorimetry, cement, metakaolin fineness, rheology, strength
Procedia PDF Downloads 415
2719 Growth of Struvite Crystals in Synthetic Urine Using Magnesium Nitrate
Authors: Reneiloe Seodigeng, John Kabuba, Hilary Rutto, Tumisang Seodigeng
Abstract:
Urine diversion toilets have become popular as a means of solving the challenges in sanitation. As a result, the source-separated urine must be adequately treated so that it can be disposed of safely and so that valuable struvite can be extracted for use as fertilizer. In this study, synthetic urine was prepared, and struvite crystallisation experiments were carried out using magnesium nitrate. The effect of residence time on crystal growth was studied. At residence times of 10, 30, and 60 minutes, the mean particle sizes were 17, 34, and 53 µm, respectively, showing that larger crystal sizes can be achieved with higher residence times. SEM analysis showed that the resultant crystals had the typical morphology of struvite.
Keywords: struvite, magnesium nitrate, crystallisation, urine treatment
Procedia PDF Downloads 162
2718 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, resulting in the need for substantial time to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. By employing this method, we enhance the configuration efficiency of CNC production lines and establish an adaptive capability that allows the production line to respond promptly to changes in demand. This will minimize production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, pareto optimality, layout optimization, operations sequence
Procedia PDF Downloads 24
2717 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model
Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki
Abstract:
As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources that are estimated to be potentially the largest in the world. In addition, China has enormous unmet demand for a clean alternative to substitute for coal. Nonetheless, the geological complexity of China’s shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate to a significant degree the U.S. shale gas boom, China faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China’s gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through the application of three scenarios projecting domestic shale gas supply by 2020: optimistic, medium, and conservative shale gas supply, drawing on the International Energy Agency’s (IEA’s) projections and China’s shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths. Other parameters are collected from provincial “twelfth five-year” plans and the “China Oil and Gas Pipeline Atlas”. The multi-objective optimization model uses GAMS and MATLAB. It aims to minimize the demand that cannot be met, while simultaneously seeking to minimize total gas supply and transmission costs. The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results between the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing will not be able to be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient for meeting the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of some of the existing network, and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China
Procedia PDF Downloads 289
2716 Cosmic Dust as Dark Matter
Authors: Thomas Prevenslik
Abstract:
Weakly Interacting Massive Particle (WIMP) experiments suggesting dark matter does not exist are consistent with the argument that the long-standing galaxy rotation problem may be resolved without the need for dark matter if the redshift measurements giving the higher-than-expected galaxy velocities are corrected for the redshift in cosmic dust. Because of the ubiquity of cosmic dust, all velocity measurements in astronomy based on redshift are most likely overstated; e.g., an accelerating expansion of the Universe need not exist if the data showing supernovae brighter than expected from the redshift/distance relation are corrected for the redshift in dust. Extensions of redshift corrections for cosmic dust to other historical astronomical observations are briefly discussed.
Keywords: alternative theories, cosmic dust redshift, doppler effect, quantum mechanics, quantum electrodynamics
Procedia PDF Downloads 298
2715 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers, it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. Especially the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the train wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by using larger roughness elements, especially when applied at the height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
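The integral boundary layer quantities evaluated above follow the standard definitions, with u(y) the measured velocity profile and U∞ the free-stream value in the frame of the moving model:

```latex
\delta^{*} = \int_{0}^{\delta}\left(1-\frac{u(y)}{U_{\infty}}\right)dy, \qquad
\theta = \int_{0}^{\delta}\frac{u(y)}{U_{\infty}}\left(1-\frac{u(y)}{U_{\infty}}\right)dy, \qquad
H = \frac{\delta^{*}}{\theta}
```

The form factor H is what the abstract tracks toward a constant value: a drop of H toward its equilibrium level behind the roughness elements indicates that the artificially thickened boundary layer has relaxed to a self-similar state.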
Procedia PDF Downloads 308