Search results for: testing simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7783

13 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. Solving the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand and the computational cost on the other. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the Line-by-Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared with the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
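As a rough illustration of the sampling ideas described above, the following Python sketch estimates the emission-weighted transmittance of a toy plane-parallel, purely absorbing slab, once with pseudo-random numbers (plain PMC) and once with a scrambled Sobol low-discrepancy sequence (QMC), and contrasts band selection by importance sampling with uniform sampling plus re-weighting. The band absorption coefficients, emissive weights, and slab optical thickness are invented for illustration and are not the LBL spectral data used in the paper.

```python
# Illustrative sketch only: a toy plane-parallel, purely absorbing gray-band slab,
# estimating the fraction of wall-emitted photon bundles transmitted to the
# opposite wall.  All numerical values are hypothetical.
import numpy as np
from scipy.stats import qmc

TAU = 1.0                                  # optical thickness of the slab
BANDS = np.array([0.1, 0.3, 3.0])          # absorption coefficient per band (assumed)
EMISSION_W = np.array([0.2, 0.5, 0.3])     # emissive weight of each band (assumed)

def transmitted_fraction(u, importance_sampling=True):
    """u: (n, 2) points in [0,1)^2 -> (band choice, direction cosine)."""
    if importance_sampling:
        # Sample bands proportionally to their emissive weight (importance
        # sampling); each bundle then carries unit statistical weight.
        band = np.searchsorted(np.cumsum(EMISSION_W), u[:, 0])
        weight = np.ones(len(u))
    else:
        # Sample bands uniformly and re-weight instead.
        band = (u[:, 0] * len(BANDS)).astype(int)
        weight = EMISSION_W[band] * len(BANDS)
    mu = np.sqrt(u[:, 1])                   # diffuse emission: p(mu) = 2*mu
    transmitted = np.exp(-TAU * BANDS[band] / mu)
    return np.mean(weight * transmitted)

n = 2 ** 14
rng = np.random.default_rng(0)
pseudo = rng.random((n, 2))                               # pseudo-random (PMC)
sobol = qmc.Sobol(d=2, scramble=True, seed=0).random(n)   # low-discrepancy (QMC)

print("PMC estimate :", transmitted_fraction(pseudo))
print("QMC estimate :", transmitted_fraction(sobol))
```

Running both estimators repeatedly with different seeds would reproduce, in miniature, the variance reduction that the QMC sampling provides over the pseudo-random PMC baseline.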

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
12 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects such as asteroids offers two main incentives: one is to find rare earth elements, the other to gain scientific knowledge about the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical fields of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may also be applied to other, earth-bound applications such as deep-sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. The indirect methods have been studied for several decades, and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g. sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to the gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims such as short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level with IP to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. At the same time, they show the enormous increase in possibilities for flight maneuvers when different and opposing mission objectives can be considered.
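To make the idea of full discretization concrete, the following minimal Python sketch transcribes a toy optimal control problem (a 1-D double integrator that must reach a target at rest while spending little control energy) into an NLP and solves it with SciPy's SLSQP, an SQP-type method used here only as a stand-in for WORHP. The dynamics, weights, and all names are illustrative assumptions, not the paper's spacecraft model.

```python
# Minimal sketch of direct transcription ("full discretization") of an OCP.
import numpy as np
from scipy.optimize import minimize

N, T = 40, 1.0                 # number of intervals, fixed final time
h = T / N
w_energy, w_miss = 1.0, 100.0  # multi-criteria weighting factors (assumed)

def unpack(z):
    x = z[: 2 * (N + 1)].reshape(N + 1, 2)   # states: position, velocity
    u = z[2 * (N + 1):]                      # controls on the N intervals
    return x, u

def objective(z):
    x, u = unpack(z)
    miss = (x[-1, 0] - 1.0) ** 2 + x[-1, 1] ** 2   # reach position 1, at rest
    return w_energy * h * np.sum(u ** 2) + w_miss * miss

def defects(z):
    # Explicit-Euler collocation defects: x[k+1] - x[k] - h*f(x[k], u[k]) = 0
    x, u = unpack(z)
    f = np.column_stack([x[:-1, 1], u])      # dx/dt = v, dv/dt = u
    return (x[1:] - x[:-1] - h * f).ravel()

def initial_state(z):
    x, _ = unpack(z)
    return x[0]                              # start at rest at the origin

z0 = np.zeros(2 * (N + 1) + N)
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": initial_state}],
               options={"maxiter": 300})
x_opt, u_opt = unpack(res.x)
print("converged:", res.success, " final state:", x_opt[-1])
```

The same pattern, with the spacecraft ODEs in place of the double integrator and WORHP in place of SLSQP, yields the high-dimensional but sparse NLPs discussed in the abstract.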

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 412
11 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research work focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress. As a result, shear forces can only be transmitted to a limited extent by friction once cracking begins, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well in this case and showed very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
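As a highly simplified picture of what a nonlinear time-history analysis extracts, the Python sketch below integrates a single elastoplastic (elastic-perfectly-plastic) oscillator under a synthetic ground motion and records the peak absolute acceleration, the quantity that plays the role of the PHFA at a floor. All structural parameters and the excitation are assumed for illustration; the study itself uses multiPlas continuum FE models and eighteen synthetic records per hazard level.

```python
# Toy nonlinear time-history analysis of one elastoplastic oscillator.
import numpy as np

dt, tmax = 0.002, 20.0
t = np.arange(0.0, tmax, dt)
rng = np.random.default_rng(1)
# crude synthetic ground acceleration: noise modulated by an envelope (assumed)
ag = 2.5 * np.exp(-((t - 6.0) / 4.0) ** 2) * rng.standard_normal(t.size)

m, k, zeta, fy = 1.0, 400.0, 0.05, 1.5      # mass, stiffness, damping ratio, yield force
c = 2.0 * zeta * np.sqrt(k * m)

u = v = up = 0.0                            # displacement, velocity, plastic slip
peak_abs_acc = 0.0
for a_g in ag:
    fs = k * (u - up)
    if abs(fs) > fy:                        # elastic-perfectly-plastic spring
        up = u - np.sign(fs) * fy / k
        fs = np.sign(fs) * fy
    acc_rel = (-c * v - fs) / m - a_g       # relative acceleration
    peak_abs_acc = max(peak_abs_acc, abs(acc_rel + a_g))
    v += acc_rel * dt                       # semi-implicit Euler update
    u += v * dt

print(f"peak absolute acceleration: {peak_abs_acc:.2f} m/s^2")
```

Repeating such runs over a suite of records and normalizing the peaks over the building height is, in essence, how a PHFA distribution is assembled, albeit here in caricature.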

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 168
10 IEEE 802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With recent technological advances, the wireless sensor network (WSN) has become one of the most promising candidates for implementing the wireless industrial internet of things (IIoT) architecture. However, legacy IEEE 802.15.4 based WSN technologies such as the Zigbee system cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIoT environment. Recently, the IEEE developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose. Furthermore, the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e smoothly with the IPv6 protocol to form a complete protocol stack for IIoT. In this work, we develop key network technologies for an IEEE 802.15.4e based wireless IIoT architecture, focusing on practical design and system implementation. We realize an OpenWSN-based wireless IIoT system. The system architecture is divided into three main parts: web server, network manager, and sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and instruct sensor nodes to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information. It executes the centralized scheduling algorithm, sends the scheduling table to each node, and manages the sensing tasks of each device. Sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure the proper operation of the system. To enhance the system performance, we further propose dynamic bandwidth allocation and distributed scheduling mechanisms. The developed distributed scheduling mechanism enables each individual sensor node to build, maintain, and manage the dedicated link bandwidth with its parent and children nodes based on locally observed information by exchanging Add/Delete commands via two processes. The first process, termed the schedule initialization process, allows each sensor node pair to identify the available idle slots to allocate the basic dedicated transmission bandwidth. The second process, termed the schedule adjustment process, enables each sensor node pair to adjust their allocated bandwidth dynamically according to the measured traffic loading. Such technology can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the system performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low. We propose a multi-frame retransmission mechanism that allows every network node to resend each packet at least a predefined number of times. The multi-frame architecture is built according to the number of layers of the network topology. Performance results obtained via simulation reveal that this retransmission scheme is able to provide sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of IIoT can be achieved.
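The following Python sketch illustrates, in spirit only, the schedule adjustment idea: a node compares the traffic measured toward a neighbor in the last slotframe with the cells currently allocated to that link and emits Add or Delete requests accordingly. The thresholds, message names, and cell bookkeeping are assumptions made for illustration; they are not the mechanism specified in the paper or in the 6TiSCH standards.

```python
# Illustrative sketch of TSCH-style per-link schedule adjustment.
from dataclasses import dataclass, field

SLOTFRAME_LEN = 101          # slots per slotframe (assumed)
ADD_THRESHOLD = 0.75         # request more cells above this utilization
DELETE_THRESHOLD = 0.25      # release cells below this utilization

@dataclass
class LinkSchedule:
    allocated_cells: set = field(default_factory=set)   # slot offsets in use
    packets_sent: int = 0                                # traffic in last slotframe

    def utilization(self) -> float:
        if not self.allocated_cells:
            return 1.0                                   # force an initial Add
        return self.packets_sent / len(self.allocated_cells)

    def adjust(self, idle_slots: set) -> list:
        """Return the Add/Delete commands to negotiate with the neighbor."""
        commands = []
        util = self.utilization()
        if util > ADD_THRESHOLD and idle_slots:
            slot = min(idle_slots)                       # pick any idle slot
            self.allocated_cells.add(slot)
            commands.append(("ADD", slot))
        elif util < DELETE_THRESHOLD and len(self.allocated_cells) > 1:
            slot = max(self.allocated_cells)
            self.allocated_cells.remove(slot)
            commands.append(("DELETE", slot))
        self.packets_sent = 0                            # start a new window
        return commands

# One slotframe of operation toward a parent node:
link = LinkSchedule(allocated_cells={5, 17})
link.packets_sent = 9                                    # measured traffic load
idle = set(range(SLOTFRAME_LEN)) - link.allocated_cells
print(link.adjust(idle))                                 # -> [('ADD', 0)]
```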

Keywords: IEEE 802.15.4e, industrial internet of things (IIOT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 160
9 Development of Portable Hybrid Renewable Energy System for Sustainable Electricity Supply to Rural Communities in Nigeria

Authors: Abdulkarim Nasir, Alhassan T. Yahaya, Hauwa T. Abdulkarim, Abdussalam El-Suleiman, Yakubu K. Abubakar

Abstract:

The need for a sustainable and reliable electricity supply in rural communities of Nigeria remains a pressing issue, given the country's vast energy deficit and the significant number of inhabitants lacking access to electricity. This research focuses on the development of a portable hybrid renewable energy system designed to provide a sustainable and efficient electricity supply to these underserved regions. The proposed system integrates multiple renewable energy sources, specifically solar and wind, to harness the abundant natural resources available in Nigeria. The design and development process involves the selection and optimization of components such as photovoltaic panels, wind turbines, energy storage units (batteries), and power management systems. These components are chosen based on their suitability for rural environments, cost-effectiveness, and ease of maintenance. The hybrid system is designed to be portable, allowing for easy transportation and deployment in remote locations with limited infrastructure. Key to the system's effectiveness is its hybrid nature, which ensures a continuous power supply by compensating for the intermittent nature of the individual renewable sources. Solar energy is harnessed during the day, while wind energy is captured whenever wind conditions are favourable, thus ensuring a more stable and reliable energy output. Energy storage units are critical in this setup, storing excess energy generated during peak production times and supplying power during periods of low renewable generation. The development is supported by studies assessing the solar irradiance, wind speed patterns, and energy consumption needs of rural communities; the simulation results inform the optimization of the system's design to maximize energy efficiency and reliability. This paper presents the development and evaluation of a 4 kW standalone hybrid system combining wind and solar power. The portable device measures approximately 8 feet 5 inches in width, 8 inches 4 inches in depth, and around 38 feet in height. It includes four solar panels with a capacity of 120 watts each, a 1.5 kW wind turbine, a solar charge controller, remote power storage, batteries, and battery control mechanisms. Designed to operate independently of the grid, this hybrid device offers versatility for use on highways and in various other applications. The paper also presents a summary and characterization of the device, along with photovoltaic data collected in Nigeria during the month of April. The construction plan for the hybrid energy tower is outlined, which involves combining a vertical-axis wind turbine with solar panels to harness both wind and solar energy. Positioned between the roadway divider and automobiles, the tower takes advantage of the air velocity generated by passing vehicles. The solar panels are strategically mounted to deflect air toward the turbine while generating energy. Generators and gear systems attached to the turbine shaft enable power generation, offering a portable solution to energy challenges in Nigerian communities. The study also addresses the economic feasibility of the system, considering the initial investment costs, maintenance, and potential savings from reduced fossil fuel use. A comparative analysis with traditional energy supply methods highlights the long-term benefits and sustainability of the hybrid system.
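To illustrate the kind of energy-balance reasoning behind sizing such a hybrid system, the Python sketch below runs a toy hourly balance for one day: a 4 x 120 W solar array and a 1.5 kW wind turbine charge a battery and serve an evening-heavy rural load. The irradiance, wind and load profiles, the battery size, and the efficiencies are all assumed for illustration; they are not measurements from the study.

```python
# Toy hourly energy balance for a small solar + wind + battery system.
import numpy as np

hours = np.arange(24)
# assumed normalized resource profiles for one day
solar_frac = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)   # daylight bell
wind_frac = 0.3 + 0.2 * np.sin(hours / 24 * 2 * np.pi + 1.0)      # mild diurnal wind

pv_kw, wind_kw = 4 * 0.120, 1.5          # 4 x 120 W panels, 1.5 kW turbine
load_kw = np.where((hours >= 18) | (hours <= 5), 0.6, 0.25)       # evening-heavy load

battery_kwh, soc = 5.0, 2.5              # capacity and initial state of charge (assumed)
eff = 0.9                                # charging efficiency (assumed)

unmet = 0.0
for h in hours:
    gen = pv_kw * solar_frac[h] + wind_kw * wind_frac[h]          # kWh this hour
    net = gen - load_kw[h]
    if net >= 0:
        soc = min(battery_kwh, soc + eff * net)                   # charge battery
    else:
        draw = min(soc, -net)
        soc -= draw
        unmet += (-net) - draw                                    # load not served
print(f"end-of-day state of charge: {soc:.2f} kWh, unmet load: {unmet:.2f} kWh")
```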

Keywords: renewable energy, solar panel, wind turbine, hybrid system, generator

Procedia PDF Downloads 41
8 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it difficult to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. The majority of small structural parts are metal because CFRP fabrication costs are high in this size class. The fact that the CFRP manufacturing processes that produce the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally more expensive than comparably performing metal parts, which are easier to produce. Fortunately, industry is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials plus the ability to harness Industry 4.0 tools. No longer limited to prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and can be used to produce end-use parts with high aesthetics, unmatched complexity, mass-customization opportunities, and high mechanical performance. A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) and post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes the preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used, are matched between filaments and tapes to ensure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with a very low void content and an excellent surface finish on both A and B sides can be achieved. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be held cost-competitively in production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 96
7 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop

Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen

Abstract:

Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in the emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods, without tedious waiting times. This enables the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when additional time-consuming algorithms such as computer vision or machine learning are included to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for the GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison, we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become especially apparent with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic.
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
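For readers unfamiliar with the update rule being accelerated, the sketch below is a minimal CPU reference of one Lenia time step in NumPy, using FFT-based convolution, the operation that the CUDA C++ implementation parallelizes on the GPU. The ring-shaped kernel and the Gaussian growth mapping follow the commonly published Lenia formulation; the particular parameter values (R, mu, sigma, dt) are assumed for illustration.

```python
# Minimal CPU reference of one Lenia update step with FFT convolution.
import numpy as np

N, R = 256, 13                      # grid size and kernel radius (assumed)
mu, sigma, dt = 0.15, 0.017, 0.1    # growth parameters and time step (assumed)

# ring-shaped kernel, normalized to sum to 1
y, x = np.ogrid[-R:R + 1, -R:R + 1]
r = np.sqrt(x * x + y * y) / R
kernel = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r <= 1)
kernel /= kernel.sum()

# embed the kernel in an N x N array centered at the origin for circular convolution
K = np.zeros((N, N))
K[:2 * R + 1, :2 * R + 1] = kernel
K = np.roll(K, (-R, -R), axis=(0, 1))
K_hat = np.fft.rfft2(K)

def growth(u):
    """Gaussian 'bump' growth mapping, in [-1, 1]."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(world):
    """One Lenia time step with periodic boundaries."""
    potential = np.fft.irfft2(np.fft.rfft2(world) * K_hat, s=world.shape)
    return np.clip(world + dt * growth(potential), 0.0, 1.0)

rng = np.random.default_rng(0)
world = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse random soup
for _ in range(100):
    world = step(world)
print("mean activation after 100 steps:", float(world.mean()))
```

On the GPU, the per-cell growth and clipping map naturally onto one CUDA thread per cell, while the convolution is the part where kernel size, FFT use, and memory layout dominate the performance differences reported above.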

Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis

Procedia PDF Downloads 41
6 Developing a Framework for Sustainable Social Housing Delivery in Greater Port Harcourt City, Rivers State, Nigeria

Authors: Enwin Anthony Dornubari, Visigah Kpobari Peter

Abstract:

This research has developed a framework for the provision of sustainable and affordable housing to accommodate the low-income population of Greater Port Harcourt City. The objectives of this study, among others, were to: examine UN-Habitat guidelines for acceptable and sustainable social housing provision; describe past efforts of the Rivers State Government and the Federal Government of Nigeria to provide housing for the poor in the Greater Port Harcourt City area; obtain a profile of prospective beneficiaries of the social housing proposed by this research, as well as their perceptions of their present living conditions and of living in the proposed self-sustaining social housing development, based on an initial simulation of the proposal; describe the nature, guidelines, and management of the proposed social housing development; and explain the modalities for its implementation. The study utilized the mixed-methods research approach, aimed at triangulating findings from the quantitative and qualitative paradigms. Opinions of professionals of the built environment; the Director of Development Control, Greater Port Harcourt City Development Authority; Directors of the Ministry of Urban Development and Physical Planning and of the Housing and Property Development Authority; and managers of selected Primary Mortgage Institutions were sought and analyzed. There were four target populations for the study, namely: members of occupational sub-groups for FGDs (Focus Group Discussions); development professionals for KIIs (Key Informant Interviews); household heads in selected communities of GPHC; and relevant public officials for IDIs (Individual Depth Interviews). Focus Group Discussions (FGDs) were held with members of occupational sub-groups (fisherfolk) in each of the eight selected communities. There were forty (40) members across all occupational sub-groups in each selected community, yielding a total of 320 in the eight (8) communities of Mgbundukwu (Mile 2 Diobu), Rumuodomaya, Abara (Etche), Igwuruta-Ali (Ikwerre), Wakama (Ogu-Bolo), Okujagu (Okrika), Akpajo (Eleme), and Okoloma (Oyigbo). For the key informant interviews, two (2) members were judgmentally selected from each of the following development professions: urban and regional planners; architects; estate surveyors; land surveyors; quantity surveyors; and engineers. Concerning Population 3, household heads in selected communities of GPHC, a stratified multi-stage sampling procedure was adopted: Stage 1, obtaining a 10% (a priori decision) sample of the component communities of GPHC in each stratum, with the number in each stratum rounded to a whole number to ensure representation of each stratum; Stage 2, obtaining the number of households to be studied by applying the Taro Yamane formula, which aided in determining the appropriate number of cases to be studied at a precision level of 5%. Findings revealed, amongst others, that poor implementation of the UN-Habitat global shelter strategy, lack of stakeholder engagement, inappropriate locations, undue bureaucracy, lack of housing fairness and equity, and the high cost of land and building materials were the reasons for the failure of past efforts towards social housing provision in the Greater Port Harcourt City area. The study recommended a public-private partnership approach for the implementation and management of the framework.
It also recommended a robust and sustained relationship between the management of the framework, the UN-Habitat office, other relevant government agencies responsible for housing development, and all investment partners, to create trust and efficiency.
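For reference, the Taro Yamane formula mentioned above sizes a sample n from a population N at precision level e. The worked number below uses a purely illustrative stratum population, since the study's actual household counts are not given here.

```latex
% Taro Yamane sample-size formula at precision level e (here e = 0.05).
% N = 2500 is an illustrative stratum population, not a figure from the study.
n = \frac{N}{1 + N e^{2}}, \qquad
n = \frac{2500}{1 + 2500 \times 0.05^{2}} = \frac{2500}{7.25} \approx 345
```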

Keywords: development, framework, low-income, sustainable, social housing

Procedia PDF Downloads 249
5 Computational Fluid Dynamics Simulation of a Nanofluid-Based Annular Solar Collector with Different Metallic Nano-Particles

Authors: Sireetorn Kuharat, Anwar Beg

Abstract:

Motivation: Solar energy constitutes the most promising renewable energy source on earth. Nanofluids are a very successful family of engineered fluids, which contain well-dispersed nanoparticles suspended in a stable base fluid. The presence of metallic nanoparticles (e.g. gold, silver, copper, aluminum) significantly improves the thermo-physical properties of the host fluid and generally results in a considerable boost in the thermal conductivity, density, and viscosity of the nanofluid compared with the original base (host) fluid. This modification in fundamental thermal properties has profound implications for the convective heat transfer process in solar collectors. The potential for improving solar collector direct absorber efficiency is immense, and to gain deeper insight into the impact of different metallic nanoparticles on efficiency and temperature enhancement, the present work describes recent computational fluid dynamics simulations of an annular solar collector system. The present work studies several different metallic nanoparticles and compares their performance. Methodologies: A numerical study of convective heat transfer in an annular pipe solar collector system is conducted. The inner tube contains pure water and the annular region contains nanofluid. Three-dimensional, steady-state, incompressible laminar flow of water-based (and other) nanofluids containing a variety of metallic nanoparticles (copper oxide, aluminum oxide, and titanium oxide) is examined. The Tiwari-Das model is deployed, in which the thermal conductivity, specific heat capacity, and viscosity of the nanofluid suspensions are evaluated as functions of the solid nanoparticle volume fraction. Radiative heat transfer is also incorporated using the ANSYS solar flux and Rosseland radiative models. The ANSYS FLUENT finite volume code (version 18.1) is employed to simulate the thermo-fluid characteristics via the SIMPLE algorithm. Mesh-independence tests are conducted. Validation of the simulations is also performed against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, and excellent correlation is achieved. The influence of volume fraction on temperature, velocity, and pressure contours is computed and visualized. Main findings: The best overall performance is achieved with copper oxide nanoparticles. Thermal enhancement is generally maximized when water is utilized as the base fluid, although in certain cases ethylene glycol also performs very efficiently. Increasing the nanoparticle solid volume fraction elevates temperatures, although the effects are less prominent in aluminum and titanium oxide nanofluids. A significant improvement in temperature distributions is achieved with copper oxide nanofluid, and this is attributed to the superior thermal conductivity of copper compared to the other metallic nanoparticles studied. Important fluid dynamic characteristics are also visualized, including circulation and temperature shoots near the upper region of the annulus. Radiative flux is observed to enhance temperatures significantly via energization of the nanofluid, although again the best improvement in performance is attained consistently with copper oxide. Conclusions: The current study generalizes previous investigations by considering multiple metallic nanoparticles and furthermore provides a good benchmark against which to calibrate experimental tests on a new solar collector configuration currently being designed at Salford University.
Detailed insights into the thermal conductivity and viscosity behavior with metallic nanoparticles are also provided. The analysis is also extendable to other metallic nanoparticles, including gold and zinc.
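For orientation, the property relations commonly used with the Tiwari-Das single-phase nanofluid model are shown below (volume-averaged density and heat capacity, Brinkman viscosity, Maxwell-Garnett conductivity). The exact correlations adopted in the paper may differ.

```latex
% phi = solid volume fraction; subscripts f = base fluid, s = nanoparticle, nf = nanofluid.
\rho_{nf} = (1-\phi)\,\rho_f + \phi\,\rho_s, \qquad
(\rho c_p)_{nf} = (1-\phi)\,(\rho c_p)_f + \phi\,(\rho c_p)_s
\mu_{nf} = \frac{\mu_f}{(1-\phi)^{2.5}} \quad \text{(Brinkman)}, \qquad
\frac{k_{nf}}{k_f} = \frac{k_s + 2k_f - 2\phi\,(k_f - k_s)}{k_s + 2k_f + \phi\,(k_f - k_s)} \quad \text{(Maxwell-Garnett)}
```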

Keywords: heat transfer, annular nanofluid solar collector, ANSYS FLUENT, metallic nanoparticles

Procedia PDF Downloads 143
4 Gamification Beyond Competition: The Case of DPG Lab Collaborative Learning Program for High-School Girls by GameLab KBTU and UNICEF in Kazakhstan

Authors: Nazym Zhumabayeva, Aleksandr Mezin, Alexandra Knysheva

Abstract:

Women's underrepresentation in STEM remains critical and is worsened by ineffective engagement in educational practices. UNICEF Kazakhstan and GameLab KBTU's collaborative initiatives aim to enhance female STEM participation by fostering an inclusive environment. Learning from LEVEL UP's 2023 program, which featured a hackathon, the 2024 strategy pivots towards non-competitive gamification. Although the data from last year's project showed higher-than-average student engagement, observations and in-depth interviews with participants showed that the format was stressful for the girls, making them focus on points rather than on other values. This study presents a gamified educational system, DPG Lab, aimed at incentivizing young women's participation in STEM through the development of digital public goods (DPGs). By prioritizing collaborative gamification elements, the project seeks to create an inclusive learning environment that increases engagement and interest in STEM among young women. The DPG Lab aims to minimize competition and support collaboration. The project is designed to motivate female participants towards the development of digital solutions through an introduction to the concept of DPGs. It consists of a short online course, a simulation video game, and a real-time online quest with an offline finale at the KBTU campus. The online course offers short video lectures on open-source development and DPG standards. The game facilitates the practical application of theoretical knowledge, enriching the learning experience. Learners can also take part in a quest that encourages them to develop DPG ideas in teams by choosing missions along the quest path. At the offline quest finale, the participants will meet in person to exchange experiences and accomplishments without engaging in comparative assessments: the quest ensures that each team's trajectory is distinct by design. This marks a shift from competitive hackathons to a collaborative format, recognizing the unique contributions and achievements of each participant. The pilot batch of students is scheduled to commence in April 2024, with the finale anticipated in June. It is projected that this group will comprise 50 female high-school students from various regions across Kazakhstan. Expected outcomes include increased engagement and interest in STEM fields among young female participants, a positive emotional and psychological impact through an emphasis on collaborative learning environments, and improved understanding and skills in DPG development. GameLab KBTU intends to undertake a hypothesis evaluation, employing a methodology similar to that utilized in the preceding LEVEL UP project. This approach will encompass the compilation of quantitative metrics (conversion funnels, test results, and surveys) and qualitative data from in-depth interviews and observational studies. For comparative analysis, a select group of participants from the previous year's project will be recruited to engage in the DPG Lab. By developing and implementing a gamified framework that emphasizes inclusion, engagement, and collaboration, the study seeks to provide practical knowledge about effective gamification strategies for promoting gender diversity in STEM. The expected outcomes of this initiative can contribute to the broader discussion on gamification in education and gender equality in STEM by offering a replicable and scalable model for similar interventions around the world.

Keywords: collaborative learning, competitive learning, digital public goods, educational gamification, emerging regions, STEM, underprivileged groups

Procedia PDF Downloads 62
3 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output

Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera

Abstract:

Nowadays, the rapid growth of global energy consumption and the uncontrolled depletion of natural resources have become a serious problem. Shale rocks are among the largest potential global basins containing hydrocarbons, which are trapped in the closed pores of the shale matrix. Regardless of the shales' origin, mining conditions are extremely unfavourable due to high reservoir pressure, great depths, increased clay mineral content, and the limited permeability (nanoDarcy) of the rocks. Taking such geomechanical barriers into consideration, effective extraction of natural gas from shales with plastic zones demands effective operations. Currently, hydraulic fracturing is the most developed technique, based on the injection of pressurized fluid into a wellbore to initiate fracture propagation. However, a rapid drop of pressure after fluid suction to the ground induces fracture closure and conductivity reduction. In order to minimize this risk, proppants should be applied. They are solid granules transported with hydraulic fluids to settle inside the rock. Proppants act as a prop for the closing fracture, so that gas migration to a borehole remains effective. Quartz sands are commonly applied as proppants only in shallow deposits (USA), whereas ceramic proppants are designed to meet rigorous downhole conditions and intensify output. Ceramic granules stand out with higher mechanical strength, stability in strongly acidic environments, spherical shape, and homogeneity. The quality of ceramic proppants is conditioned by the selection of raw materials. The aim of this study was to obtain proppants from aluminosilicates (the kaolinite subgroup) and a mix of minerals with a high alumina content. These loamy minerals exhibit a tubular and platy morphology that improves mechanical properties and reduces their specific weight. Moreover, they are distinguished by a well-developed surface area, high porosity, fine particle size, superb dispersion, and nontoxic properties, all of which are crucial for consolidating particles into spherical and crush-resistant granules in the mechanical granulation process. The aluminosilicates were mixed with water and a natural organic binder to improve liquid bridges and pore formation between particles. Afterward, the green proppants were subjected to sintering at high temperatures. Evaluation of the minerals' utility was based on their particle size distribution (laser diffraction study) and thermal stability (thermogravimetry). Scanning Electron Microscopy was used for morphology and shape identification, combined with specific surface area measurement (BET). The chemical composition was verified by Energy Dispersive Spectroscopy and X-ray Fluorescence. Moreover, bulk density and specific weight were measured. Such comprehensive characterization of the loamy materials confirmed their favourable impact on proppant granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. Pore distribution was identified by X-ray Tomography. This method also enabled the simulation of proppant settlement in a fracture, while measurement of bulk density was essential to predict the amount needed to fill a well. The roundness coefficient was also evaluated, whereas the impact on the mining environment was identified by turbidity and solubility in acid, indicating the risk of material decay in a well. The obtained outcomes confirmed a positive influence of the loamy minerals on ceramic proppant properties with respect to the strict norms.
This research opens perspectives for producing higher-quality proppants at reduced cost.

Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas

Procedia PDF Downloads 163
2 An Integrated Multisensor/Modeling Approach Addressing Climate-Related Extreme Events

Authors: H. M. El-Askary, S. A. Abd El-Mawla, M. Allali, M. M. El-Hattab, M. El-Raey, A. M. Farahat, M. Kafatos, S. Nickovic, S. K. Park, A. K. Prasad, C. Rakovski, W. Sprigg, D. Struppa, A. Vukovic

Abstract:

A clear distinction between weather and climate is a necessity because, while they are closely related, there are still important differences. Climate change is identified when we compute the statistics of the observed changes in weather over space and time. In this work, we show how the changing climate contributes to the frequency, magnitude and extent of different extreme events, using a multi-sensor approach with some synergistic modeling activities. We explore satellite observations of dust over North Africa, the Gulf Region and the Indo-Gangetic basin, as well as dust versus anthropogenic pollution events over the Delta region in Egypt and over Seoul, through remote sensing, and we examine the influence of dust and haze on aerosol optical properties. The impact of dust on the retreat of the glaciers in the Himalayas is also presented. In this study, we also focus on the identification and monitoring of a massive dust plume that blew off the western coast of Africa towards the Atlantic on October 8th, 2012, right before the development of Hurricane Sandy. There is evidence that dust aerosols played a non-trivial role in the cyclogenesis process of Sandy. Moreover, a special dust event, "An American Haboob" in Arizona, is discussed, as it was predicted hours in advance thanks to the great improvements in numerical land-atmosphere modeling, computing power and remote sensing of dust events. We therefore performed a full numerical simulation of that event using the coupled atmospheric-dust model NMME-DREAM, after generating a mask of the potentially dust-productive regions using land cover and vegetation data obtained from satellites. Climate change also contributes to the deterioration of different marine habitats. In that regard, we also present work dealing with change detection analysis of marine habitats around the city of Hurghada, Red Sea, Egypt. The motivation for this work came from the fact that coral reefs at Hurghada have undergone significant decline. They are damaged, displaced, polluted, stepped on, and blasted off, in addition to the effects of climate change on the reefs. One of the most pressing issues affecting reef health is mass coral bleaching that results from an interaction between human activities and climatic changes. Over another location, namely California, we observe highly variable amounts of precipitation across many timescales, from the hourly to the climate timescale. Frequently, heavy precipitation occurs, causing damage to property and life (floods, landslides, etc.). These extreme events, their variability, and the lack of good medium- to long-range predictability of precipitation are already a challenge to those who manage wetlands, coastal infrastructure, agriculture and the fresh water supply. Adding to the current challenges for long-range planning is the climate change issue. It is known that La Niña and El Niño affect precipitation patterns, which in turn are entwined with global climate patterns. We have studied the ENSO impact on precipitation variability over different climate divisions in California. The Nile Delta, on the other hand, has lately experienced a rise in the underground water table as well as waterlogging, bogging and soil salinization. Those impacts pose a major threat to the Delta region's heritage and existing communities. There is an ongoing effort to address those vulnerabilities by looking into many adaptation strategies.

Keywords: remote sensing, modeling, long range transport, dust storms, North Africa, Gulf Region, India, California, climate extremes, sea level rise, coral reefs

Procedia PDF Downloads 488
1 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with the fires' direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of the various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors such as weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, resource allocation for firefighting efforts, and preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and advances our understanding of the way forest fires spread. Some of the known models in this field are Rothermel's wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors such as landscape elevation), and fuel availability (factors such as vegetation type), among others. The models discussed are physics-based, data-driven, or hybrid models, some also utilizing ML techniques such as attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action.
Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
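To give a flavor of the cellular-automaton family of spread models mentioned in the survey, the Python sketch below advances a toy wind-biased fire front on a grid of fuel cells. The ignition probabilities and the wind multipliers are invented for illustration; this is not Rothermel's model, FARSITE, or any other named system.

```python
# Toy wind-biased cellular-automaton fire spread model.
import numpy as np

UNBURNED, BURNING, BURNED = 0, 1, 2
P_BASE = 0.25                           # base probability of spreading to a neighbor
WIND = {(0, 1): 2.0, (0, -1): 0.5,      # spread multiplier per direction (east wind)
        (1, 0): 1.0, (-1, 0): 1.0}

def step(grid, rng):
    """Advance the fire by one time step."""
    new = grid.copy()
    rows, cols = grid.shape
    for r, c in zip(*np.where(grid == BURNING)):
        for (dr, dc), gain in WIND.items():
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == UNBURNED:
                if rng.random() < min(1.0, P_BASE * gain):
                    new[rr, cc] = BURNING
        new[r, c] = BURNED              # a burning cell burns out after one step
    return new

rng = np.random.default_rng(0)
grid = np.zeros((60, 60), dtype=int)
grid[30, 30] = BURNING                  # single ignition point
for _ in range(40):
    grid = step(grid, rng)
print("burned cells:", int(np.sum(grid == BURNED)))
```

Real spread models replace the constant spread probability with terms driven by fuel type, slope, and wind, which is exactly the structure the surveyed physics-based, data-driven, and hybrid approaches refine.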

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 81