Search results for: simulation modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8098

28 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment

Authors: Rouzbeh Jafari, Joe Nava

Abstract:

This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstocks to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstocks; rather, it reports the economic drivers and technical challenges that help in developing a road map for expanding biohydrogen economy deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate technical drawbacks of large-scale hydrogen production. In this study, those learnings have been applied to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was established as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment) the impact of these parameters on the commercialization risk matrix and class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. Several emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase overall hydrogen yield. To properly report the impact of each parameter, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstocks with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrates) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pretreatment and feedstock mix-and-match were implemented to investigate changes in the KPIs. The complexity of the feedstock is the main bottleneck to successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates during the lifetime of the process or from one case to another. In this context, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but two or more products (biohydrogen and biomethane, or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
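
As a rough illustration of the parametric approach described above, the following Python sketch varies the carbohydrate fraction of a feedstock and tracks a few KPIs; all correlations and numbers are hypothetical placeholders, not project data:

```python
# Toy parametric KPI sweep: hydrogen yield is assumed to scale with the
# carbohydrate fraction; specific energy demand and $/kg-H2 are placeholders.

def biohydrogen_kpis(carb_fraction, feed_rate_tpd=100.0):
    """Hypothetical KPI model for one feedstock composition."""
    h2_yield = 0.02 * carb_fraction               # t H2 per t feed (assumed)
    h2_prod = feed_rate_tpd * h2_yield            # t H2 per day
    energy = 55.0 * h2_prod                       # MWh/day (assumed demand)
    cost_per_kg = 2.5 / max(carb_fraction, 1e-6)  # $/kg H2 (assumed)
    return {"t_H2_per_day": h2_prod, "MWh_per_day": energy,
            "$/kg_H2": round(cost_per_kg, 2)}

for carb in (0.2, 0.4, 0.6, 0.8):
    print(carb, biohydrogen_kpis(carb))
```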

Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy

Procedia PDF Downloads 108
27 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling

Authors: Danlei Yang, Luofeng Huang

Abstract:

The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. The core of a digital twin, however, should be its model, able to mirror, shadow, and thread with the real-world entity, and this aspect is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
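
A minimal sketch of the ANN-based digital model described above, assuming the sensor logs are available as arrays; the feature set, synthetic data, and network size are illustrative, since the abstract does not specify the architecture:

```python
# Train an MLP to map sensor readings to panel power, then score it on
# held-out data, mirroring the train/validation/test split described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: irradiance, air T, water T, panel T, wave height, pitch -> power
X = rng.random((1000, 6))
y = 150 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 1, 1000)  # synthetic stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```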

Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence

Procedia PDF Downloads 4
26 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it exhibits reduced emission of toxic chemicals. For e-cigarettes, lower heating powers can be considered powers below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, application of PDA to e-cigarette aerosol measurement is rarely reported. In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian, and polynomial (EGP) distributions, was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet along with a reduction of particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. Particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, contributing to the improvement of related aerosol-generating devices.
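
As an illustration of the reported size statistics, the following sketch estimates the count median aerodynamic diameter (CMAD) and geometric standard deviation (GSD) from a synthetic lognormal size sample; the actual EGP fit is study-specific and not reproduced here:

```python
# CMAD is the median of the diameters; GSD is exp(std of ln(d)), the usual
# definition for aerosol size distributions. The sample below is synthetic.
import numpy as np

diam = np.random.default_rng(1).lognormal(mean=np.log(0.7),
                                          sigma=np.log(1.4), size=5000)

cmad = np.median(diam)                     # count median aerodynamic diameter
gsd = np.exp(np.std(np.log(diam)))         # geometric standard deviation
print(f"CMAD = {cmad:.2f} um, GSD = {gsd:.2f}")
```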

Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 47
25 Intensification of Wet Air Oxidation of Landfill Leachate Reverse Osmosis Concentrates

Authors: Emilie Gout, Mathias Monnot, Olivier Boutin, Pierre Vanloot, Philippe Moulin

Abstract:

Water is a precious resource, and treating industrial wastewater remains a considerable technical challenge of our century. The effluent considered in this study is landfill leachate treated by reverse osmosis (RO). Nowadays, in most developed countries, sanitary landfilling is the main method of dealing with municipal solid waste. Rainwater percolates through the solid waste, generating leachates mostly composed of organic and inorganic matter. As leachate ages, its composition varies, becoming more and more bio-refractory. RO is already used for landfill leachates as it generates good quality permeate. However, its main drawback is the production of highly polluted concentrates that cannot be discharged into the environment or reused, which is an important industrial issue. It is against this background that the study of coupling RO with wet air oxidation (WAO) was set up, to intensify and optimize the processes so as to meet current regulations for water discharge into the environment. WAO is widely studied for effluents containing bio-refractory compounds. Oxidation consists of a destruction reaction capable, when complete, of mineralizing the recalcitrant organic fraction of the pollution into carbon dioxide and water. The WAO process in subcritical conditions requires high energy consumption, but it can be autothermal in a certain range of chemical oxygen demand (COD) concentrations (10-100 g.L⁻¹). Appropriate COD concentrations are reached in landfill leachate RO concentrates. Therefore, the purpose of this work is to report the mineralization performance of WAO on RO concentrates. The coupling of RO/WAO has shown promising results in previous works on both synthetic and real effluents, in terms of total organic carbon (TOC) reduction by WAO and retention by RO. Non-catalytic WAO with air as the oxidizer was performed in a lab-scale stirred autoclave (1 L) on landfill leachate RO concentrates collected in different seasons at a sanitary landfill in southern France. The yield of WAO depends on operating parameters such as total pressure, temperature, and time. The composition of the effluent is also an important aspect of process intensification. An experimental design methodology was used to minimize the number of experiments whilst finding the operating conditions achieving the best pollution reduction. The design led to a set of 18 experiments, and the responses used to assess process efficiency are pH, conductivity, turbidity, COD, TOC, and inorganic carbon. A 70% oxygen excess was chosen for all the experiments. The first experiments showed that COD and TOC abatements of at least 70% were obtained after 90 min at 300°C and 20 MPa, which attested to the possibility of treating RO leachate concentrates with WAO. In order to meet French regulations and validate process intensification with industrial effluents, continuous experiments in a bubble column are foreseen, and further analyses will be performed, such as biological oxygen demand and a study of the gas composition. Meanwhile, other industrial effluents are being treated to compare RO-WAO performance. These effluents, coming from the pharmaceutical, petrochemical, and tertiary wastewater industries, present different specific pollutants that will provide a better understanding of the hybrid process and prove the intensification and feasibility of the process at an industrial scale. Acknowledgments: This work has been supported by the French National Research Agency (ANR) for the project TEMPO under reference number ANR-19-CE04-0002-01.
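
The experimental-design logic can be sketched as follows: enumerate a small factorial over temperature, pressure, and time and evaluate a response for each run; the response function below is a placeholder, not the TEMPO project's model:

```python
# Enumerate a 3x3x3 factorial over the WAO operating parameters and compute a
# hypothetical COD abatement for each run (the study used a reduced 18-run
# design rather than the full factorial shown here).
from itertools import product

def cod_abatement(T_C, P_MPa, t_min):
    """Hypothetical response surface, for illustration only."""
    return min(0.95, 0.002 * (T_C - 200) + 0.01 * P_MPa + 0.002 * t_min)

runs = list(product((250, 275, 300), (10, 15, 20), (30, 60, 90)))
for T, P, t in runs:
    print(f"T={T} C  P={P} MPa  t={t} min  ->  COD abatement ~ "
          f"{cod_abatement(T, P, t):.0%}")
print(len(runs), "runs in the full factorial")
```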

Keywords: hybrid process, landfill leachates, process intensification, reverse osmosis, wet air oxidation

Procedia PDF Downloads 136
24 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool

Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad

Abstract:

In present-day aero engines, fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are getting bigger and lighter and thus become more flexible. Therefore, flutter, forced blade response, and vibration-related failure of high-aspect-ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, finite element method, finite volume method, or coupled variants. These numerical schemes solve the coupled fluid-structure equations based on the full Navier-Stokes (NS) equations together with the equations of structural mechanics. Schemes of this type provide very accurate results if modeled properly; however, they are computationally very expensive and need large computing resources along with considerable personal expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase. A reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand at this stage. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as in flexible wings. A hybrid flow solver based on a viscous-inviscid coupled 3D panel method (PM) and a 3D discrete vortex particle method (DVM) is developed; viscous parameters are estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. In the second phase of the research, the flow solver (PM) will be coupled with a reduced-order, non-linear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, the experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers, or fan blades will provide researchers and engineers a fast, cost-effective, and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
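
The partitioned FSI coupling described above can be sketched conceptually as follows, with both solvers replaced by stand-in functions; only the fixed-point coupling loop with under-relaxation is illustrated:

```python
# Partitioned FSI sketch: the aerodynamic (panel/vortex) solver and the
# structural (beam FEM) solver are called in turn until the interface
# displacement converges. Both solvers are stubbed out with toy functions.
import numpy as np

def aero_loads(displacement):          # stand-in for the PM/DVM flow solver
    return 1.0e3 * np.tanh(0.1 - displacement)

def structural_response(load):         # stand-in for the beam-FEM solver
    return load / 5.0e4                # static stiffness only, for brevity

disp, relax = np.zeros(1), 0.5         # under-relaxation stabilises the loop
for it in range(100):
    new_disp = structural_response(aero_loads(disp))
    if np.abs(new_disp - disp).max() < 1e-8:
        break
    disp = (1 - relax) * disp + relax * new_disp
print(f"converged in {it} iterations, displacement = {float(disp[0]):.6f} m")
```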

Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduced order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling

Procedia PDF Downloads 264
23 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing

Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May

Abstract:

Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the evolution of the Liquefied Natural Gas (LNG) industry because of the tremendous technical challenges present at all stages of the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture's properties, such as density, viscosity, surface tension, heat capacity, and phase behaviour, as functions of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture's critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points, and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures up to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %. Viscosity was measured using a vibrating-wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity, and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases, the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models for implementation in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aids the community of scientists working to advance theoretical descriptions of fluid properties by helping to identify deficiencies in those descriptions and calculations.
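
The model-comparison step can be sketched as a simple deviation calculation between measured and predicted densities; the data points below are hypothetical, and GERG-2008 is named only as an example of the kind of model compared:

```python
# Relative deviations between measured densities and model predictions over a
# (T, p) grid; compared against the stated 0.3 % experimental uncertainty.
import numpy as np

rho_meas = np.array([410.2, 388.7, 352.1])    # kg/m^3, hypothetical points
rho_model = np.array([409.1, 390.0, 355.0])   # e.g. GERG-2008 predictions

dev = 100 * (rho_meas - rho_model) / rho_model
print("relative deviations / %:", np.round(dev, 2))
print(f"max |dev| = {np.abs(dev).max():.2f} %  "
      "(cf. 0.3 % experimental uncertainty for density)")
```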

Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models

Procedia PDF Downloads 273
22 From Linear to Circular Model: An Artificial Intelligence-Powered Approach in Fosso Imperatore

Authors: Carlotta D’Alessandro, Giuseppe Ioppolo, Katarzyna Szopik-Depczyńska

Abstract:

The growing scarcity of resources and the mounting pressures of climate change, water pollution, and chemical contamination have prompted societies, governments, and businesses to seek ways to minimize their environmental impact. To combat climate change and foster sustainability, Industrial Symbiosis (IS) offers a powerful approach, facilitating the shift toward a circular economic model. IS has gained prominence in the European Union's policy framework as a crucial enabler of resource efficiency and circular economy practices. The essence of IS lies in the collaborative sharing of resources such as energy, material by-products, waste, and water, made possible by geographic proximity. It is exemplified by eco-industrial parks (EIPs), which are natural environments for boosting cooperation and resource sharing between businesses. EIPs are characterized by a group of businesses situated in proximity, connected by a network of both cooperative and competitive interactions. They represent a sustainable industrial model aimed at reducing resource use, waste, and environmental impact while fostering economic and social wellbeing. IS, combined with Artificial Intelligence (AI)-driven technologies, can further optimize resource sharing and efficiency within EIPs. This research, supported by the "CE_IPs" project, aims to analyze the potential of IS and AI in advancing circularity and sustainability at Fosso Imperatore. The Fosso Imperatore Industrial Park in Nocera Inferiore, Italy, specializes in agriculture and the industrial transformation of agricultural products, particularly tomatoes, tobacco, and textile fibers. This unique industrial cluster, centered around tomato cultivation and processing, also includes mechanical engineering enterprises and agricultural packaging firms. To stimulate the shift from a traditional to a circular economic model, an AI-powered Local Development Plan (LDP) is developed for Fosso Imperatore. It can leverage data analytics, predictive modeling, and stakeholder engagement to optimize resource utilization, reduce waste, and promote sustainable industrial practices. A comprehensive SWOT analysis of the AI-powered LDP revealed several key factors influencing its potential success and challenges. Among the notable strengths and opportunities arising from AI implementation are reduced processing times, fewer human errors, and increased revenue generation. Furthermore, predictive analytics minimize downtime, bolster productivity, and elevate quality while mitigating workplace hazards. However, the integration of AI also presents potential weaknesses and threats, including significant financial investment, since implementing and maintaining AI systems can be costly. The widespread adoption of AI could lead to job losses in certain sectors. Lastly, AI systems are susceptible to cyberattacks, posing risks to data security and operational continuity. Moreover, an Analytic Hierarchy Process (AHP) analysis was employed to yield a prioritized ranking of the outlined AI-driven LDP practices based on stakeholder input, ensuring a more comprehensive and representative understanding of their relative significance for achieving sustainability in the Fosso Imperatore Industrial Park. While this study provides valuable insights into the potential of an AI-powered LDP at Fosso Imperatore, it is important to note that the findings may not be directly applicable to all industrial parks, particularly those with different sizes, geographic locations, or industry compositions. Additional study is necessary to scrutinize the generalizability of these results and to identify best practices for implementing AI-driven LDPs in diverse contexts.
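
A minimal sketch of the AHP prioritization step, assuming a 3x3 pairwise comparison matrix collected from stakeholders (the matrix values are illustrative):

```python
# AHP: the priority vector is the normalized principal eigenvector of the
# pairwise comparison matrix; the consistency ratio checks judgment quality.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # random index RI = 0.58 for n = 3
print("priorities:", np.round(w, 3), " CR =", round(cr, 3))  # CR < 0.1 is OK
```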

Keywords: artificial intelligence, climate change, Fosso Imperatore, industrial park, industrial symbiosis

Procedia PDF Downloads 23
21 Introducing Transport Engineering through Blended Learning Initiatives

Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi

Abstract:

Undergraduate students entering university across the last 2 to 3 years tend to have been born during the mid-1990s. This generation of students has been exposed to the internet, and to the desire for and dependency on technology, since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than traditional lecture or discussion formats. Furthermore, post-millennials' perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship, and innovation. Accordingly, it is important for educators to acknowledge this generational shift and tailor the delivery of learning material to meet the expectations of students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering, and highway design is fundamental to the progression of the profession from both practice and research perspectives. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional "pen and paper" approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn the material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent, providing scalable education that satisfies the expectations of students. This research study involved the development of a series of 'blended learning' initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources were aimed at improving the learning environment for a large class size in excess of 450 students and ensuring that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator's perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate civil and environmental engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large university courses.

Keywords: blended learning, highway design, teaching, transport planning

Procedia PDF Downloads 148
20 Thermal Characterisation of Multi-Coated Lightweight Brake Rotors for Passenger Cars

Authors: Ankit Khurana

Abstract:

Sufficient heat storage capacity, or the ability to dissipate heat, is the most decisive parameter for the effective and efficient functioning of friction-based brake disc systems. The primary aim of the research was to analyse the effect of multiple coatings on the surface of lightweight disc rotors, which not only reduces the mass of the vehicle but also augments heat transfer. This research is intended to give the automobile fraternity a clear view of the thermal aspects of a braking system. The results of the project indicate that, with the advent of modern coating technologies, a brake system's thermal limitations can be removed, and together with forced convection, heat transfer processes can see a drastic improvement, leading to increased lifetime of the brake rotor. Other advantages of modifying the surface of a lightweight rotor substrate are a reduced overall vehicle weight, a decreased risk of thermal brake failure (brake fade and fluid vaporization), longer component life, and lower noise and vibration characteristics. A mathematical model was constructed in MATLAB encompassing the various thermal characteristics of the proposed coatings and substrate materials required to approximate the heat flux values in free and forced convection environments, resembling a real-time braking phenomenon; this could then be modelled as a full-scale version of the alloy brake rotor part in ABAQUS. The finite element model of a brake rotor was built in a constrained environment such that the nodal temperatures between the contact surfaces of the coatings and the substrate (wrought aluminum alloy) resemble an amalgamated solid brake rotor element. The initial results obtained were for a plasma electrolytic oxidized (PEO) substrate, wherein the aluminum alloy has a hard ceramic oxide layer grown on its transitional phase. The rotor was modelled and then evaluated in real time for a constant-g braking event (based upon the mathematical heat flux input and convective surroundings), which showed the necessity of depositing a conducting (sacrificial) coat above the PEO layer in order to prevent premature thermal degradation of the barrier coating. A Taguchi study was then used to identify the critical factors that may influence the maximum operating temperature of a multi-coated brake disc by simulating brake tests: a) an Alpine descent lasting 50 seconds; b) an Autobahn stop lasting 3.53 seconds; c) six high-speed repeated stops in accordance with FMVSS 135, lasting 46.25 seconds. Thermal barrier coating thickness and vane heat transfer coefficient were the two most influential factors, and owing to their design and manufacturing constraints, a final optimized model was obtained which survived the six-high-speed-stop test per the FMVSS 135 specifications. The simulation data highlighted the merits of preferring wrought aluminum alloy 7068 over grey cast iron and aluminum metal matrix composite, in coherence with the multiple coating depositions.
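
A hedged sketch of the kind of lumped-parameter heat balance behind such a MATLAB model: frictional heat flux into one rotor during a constant-g stop, with convective loss; all vehicle and material parameters are illustrative, not the study's values:

```python
# Lumped heat balance: q_in from braking power (one rotor's assumed 25 %
# share of the vehicle's kinetic energy dissipation), q_out by convection.
m, cp, h, A = 5.0, 900.0, 90.0, 0.12      # rotor mass kg, J/kgK, W/m^2K, m^2
T, T_amb, dt = 20.0, 20.0, 0.01           # deg C, ambient deg C, time step s
v, decel, mass_car = 27.8, 6.0, 1500.0    # m/s, m/s^2, vehicle mass kg

t = 0.0
while v > 0:
    q_in = 0.25 * mass_car * decel * v    # braking power into one rotor, W
    q_out = h * A * (T - T_amb)           # convective loss, W
    T += (q_in - q_out) * dt / (m * cp)   # explicit temperature update
    v -= decel * dt
    t += dt
print(f"stop time {t:.1f} s, peak rotor temperature ~ {T:.0f} deg C")
```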

Keywords: lightweight brakes, surface modification, simulated braking, PEO, aluminum

Procedia PDF Downloads 407
19 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach

Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa

Abstract:

Drug discovery is shifting from small-molecule drugs targeting local active sites to middle molecules (MMs) targeting large, flat, groove-shaped binding sites, for example protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as "difficult to drug" with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for "undruggable" intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that, their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work aims to study the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered in this study. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated resolution-of-identity second-order Moller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues pass through the medium on a nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of the analogues so that they can permeate the lipophilic cell membrane. In conclusion, the cell membrane permeability of various middle molecules with potent bioactivities was efficiently studied using molecular dynamics simulations, and insight into this behavior was thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for favorable membrane permeation. These results are a nice example of how this theoretical approach could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under grant number 18ae0101047.
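
For context, the inhomogeneous solubility-diffusion model often used with MD data estimates permeability from the potential of mean force W(z) and local diffusivity D(z) across the membrane; the sketch below uses synthetic profiles, not the syringolin results:

```python
# P_eff = 1 / integral( exp(W(z)/kBT) / D(z) dz ) across the membrane normal.
import numpy as np

kB_T = 0.593                                   # kcal/mol at ~298 K
z = np.linspace(-2.0, 2.0, 401)                # nm, membrane normal coordinate
W = 4.0 * np.exp(-z**2 / 0.5)                  # kcal/mol, hypothetical barrier
D = 1.0e-5 * np.ones_like(z)                   # cm^2/s, assumed constant

integrand = np.exp(W / kB_T) / D               # local resistance density
dz_cm = np.diff(z) * 1e-7                      # nm -> cm
resistance = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dz_cm)
print(f"effective permeability ~ {1.0 / resistance:.2e} cm/s")
```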

Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation

Procedia PDF Downloads 185
18 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator

Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic

Abstract:

The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similar low failure rates of a typically 30%-rated back-to-back power electronics converter in 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure; 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion than hysteresis counterparts with variable switching rates and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme to be presented will overcome this shortcoming. The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio, upon which the underlying rotor angular velocity and position MRAS observer relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
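
The MRAS principle can be sketched with a toy first-order example: a PI adaptation law drives the speed estimate so that the adjustable model tracks the reference model; the models below merely stand in for the BDFRG flux/current models and are purely illustrative:

```python
# MRAS sketch: the error between a reference model (fed by the true speed via
# measurements) and an adjustable model (fed by the estimate) drives a PI
# adaptation law until the estimated speed matches the true one.
dt, w_true, w_hat = 1e-4, 100.0, 0.0          # s; rad/s actual and estimated
kp, ki, integ = 50.0, 2000.0, 0.0
x_ref = x_adj = 0.0

for _ in range(20000):                        # 2 s of simulated time
    x_ref += dt * (-x_ref + w_true)           # reference model output
    x_adj += dt * (-x_adj + w_hat)            # adjustable model output
    err = x_ref - x_adj                       # cross-error drives adaptation
    integ += ki * err * dt
    w_hat = kp * err + integ                  # PI adaptation law
print(f"estimated speed = {w_hat:.1f} rad/s (true {w_true})")
```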

Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion

Procedia PDF Downloads 61
17 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace, and army industries have been paying increasing attention to finite element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze several problems involving costly and dangerous extreme loadings safely and at reduced cost, such as blast or ballistic impact problems. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact, etc.). In this work, the investigations concern the normal impact of a conical-head-shaped projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in simulations of the fracture process of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On a macro scale, and according to the following fracture models, the variables on which fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain-to-failure model, the Johnson-Cook model, the Wierzbicki model, and the modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model were identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study over a wide range of strain rates: quasi-static and intermediate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be determined. Intermediate tension testing was coupled with dynamic double shear experiments conducted on the Hopkinson tube device, allowing the effect of high strain rate on damage evolution and crack propagation to be identified. The aforementioned fracture criteria are implemented into the FE code ABAQUS via a VUMAT subroutine and coupled with suitable constitutive relations to allow reliable simulation of ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, is detailed in this work.
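
As a worked example of one of the four criteria, the Johnson-Cook fracture model gives the failure strain as a function of triaxiality, strain rate, and temperature, with linear damage accumulation; the D1..D5 values below are illustrative, not the calibrated parameters of this study:

```python
# Johnson-Cook failure strain:
# eps_f = [D1 + D2*exp(D3*eta)] * [1 + D4*ln(rate/rate0)] * [1 + D5*T*]
# Damage accumulates as D = sum(delta_eps_p / eps_f); failure when D >= 1.
import math

def jc_failure_strain(eta, eps_rate, T, D=(-0.8, 2.1, -0.5, 0.002, 0.61),
                      eps0=1e-4, T_room=297.0, T_melt=1800.0):
    D1, D2, D3, D4, D5 = D
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return ((D1 + D2 * math.exp(D3 * eta))
            * (1 + D4 * math.log(max(eps_rate / eps0, 1e-12)))
            * (1 + D5 * T_star))

eps_f = jc_failure_strain(eta=0.33, eps_rate=1e-1, T=297.0)
print(f"failure strain at eta=0.33: {eps_f:.3f}")
```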

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 312
16 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes, and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞ controls) can only control a small part of the mechanical modes, namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. A further problem is the unknown process forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from single-axis design to multi-axes design. It is capable of simulating the mechanical, electrical, and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes, and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation, and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder using a piezo actuator and acceleration sensors. In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values are a good measure of highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project "Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)" (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
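
The discrete component-selection problem can be sketched as follows: each (motor, gearbox, screw) combination yields toy axis dynamics, and the objective is the integral of the position deviation after a unit step; all component data and the second-order model are illustrative:

```python
# Exhaustively evaluate component combinations against an IAE objective on a
# toy second-order axis model; a black-box optimizer would replace the
# exhaustive loop for larger catalogues.
import itertools
import numpy as np

motors   = {"M1": 80.0, "M2": 120.0}     # torque-related gain (illustrative)
gearings = {"G1": 5.0, "G2": 10.0}       # gear ratio
screws   = {"S1": 0.02, "S2": 0.01}      # reflected inertia contribution

def iae(gain, ratio, inertia, dt=1e-3, t_end=2.0):
    """Unit position step on a toy 2nd-order axis; integral of |1 - x|."""
    x = v = 0.0
    acc = 0.0
    wn = np.sqrt(gain * ratio / (1.0 + inertia * ratio**2))
    for _ in range(int(t_end / dt)):
        a = wn**2 * (1.0 - x) - 2 * 0.7 * wn * v   # damping ratio 0.7 assumed
        v += a * dt
        x += v * dt
        acc += abs(1.0 - x) * dt
    return acc

best = min(itertools.product(motors, gearings, screws),
           key=lambda c: iae(motors[c[0]], gearings[c[1]], screws[c[2]]))
print("best combination:", best)
```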

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 284
15 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops

Authors: Simon Komesker, Achim Wagner, Martin Ruskowski

Abstract:

In times of volatile markets with fluctuating demand and uncertainty in global supply chains, flexible production systems are the key to the efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes into account various influencing factors for operating towards the global optimum. To this end, a strategy is developed for implementing multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry. The main contribution of this work is a system structure mixing central and decentralized planning and control, evaluated in a simulation framework. The information system structure of current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created rule-based, with the premise of achieving a uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution levels. In today's era of mixed-model (car) factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feeding back results from the process execution level (resources) and the process-supporting (quality and logistics) systems for reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks, following the two-container principle, also for different variants). The limited degrees of freedom of line production have produced the principle of progress-figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems, such as modular production according to known approaches with more degrees of freedom, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is applied, and a holonic manufacturing system is offered that enables flexible information provisioning and processing support. In this way, the influences of quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that executes centralized process planning and decentralized processing. Agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies, in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.
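
A minimal sketch of the hybrid central/decentral idea: a central planner releases orders while resource agents bid on them and process them semi-autonomously; this illustrates only the split of responsibilities, not the full holonic system:

```python
# Central order release with decentral bidding: each resource agent promises
# a finish time, the best bid wins, and the agent updates its own schedule.
class ResourceAgent:
    def __init__(self, name, speed):
        self.name, self.speed, self.busy_until = name, speed, 0.0

    def bid(self, order, now):
        start = max(now, self.busy_until)
        return start + order["work"] / self.speed   # promised finish time

    def accept(self, order, now):
        self.busy_until = self.bid(order, now)
        return self.busy_until

agents = [ResourceAgent("cell_A", 1.0), ResourceAgent("cell_B", 1.5)]
orders = [{"id": i, "work": w} for i, w in enumerate((4.0, 2.0, 6.0))]

now = 0.0
for order in orders:                                     # central release
    winner = min(agents, key=lambda a: a.bid(order, now))  # decentral bidding
    finish = winner.accept(order, now)
    print(f"order {order['id']} -> {winner.name}, finishes at t={finish:.1f}")
```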

Keywords: holonic manufacturing system, modular production system, planning, and control, system structure

Procedia PDF Downloads 168
14 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study

Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet

Abstract:

In recent years, the potential of light to assist visually impaired people in their indoor mobility has been demonstrated by different studies. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of the surface of an object is significantly influenced by the lighting conditions and the constituent materials of the object, so objects may appear different from expectations. Lighting conditions therefore play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity, and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: "Left one is smaller", "Left one is smaller but unsure", "Same size", "Right one is smaller", or "Right one is smaller but unsure". The method of constant stimuli was used, presenting stimulus pairs in random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. The collected data were analyzed through graphs illustrating participants' responses to the stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the limited range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
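
The threshold-estimation step can be sketched by fitting a logistic psychometric function to constant-stimuli data and reading off the 50% point; the data below are illustrative:

```python
# Fit a two-parameter logistic psychometric function to proportion-correct
# data and report the 50 % detection threshold (the fitted midpoint mu).
import numpy as np
from scipy.optimize import curve_fit

size_diff = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])   # mm size difference
p_correct = np.array([0.10, 0.25, 0.45, 0.65, 0.90, 0.98])

def logistic(x, mu, s):
    return 1.0 / (1.0 + np.exp(-(x - mu) / s))

(mu, s), _ = curve_fit(logistic, size_diff, p_correct, p0=(1.5, 0.5))
print(f"50% detection threshold ~ {mu:.2f} mm")
```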

Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment

Procedia PDF Downloads 63
13 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with arbitrary participating media. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional, plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles was recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help reduce computation cost further once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the concerned environment can be fully addressed by the ANN model. Better results can be achieved in this unexplored domain.
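
The MC-versus-QMC comparison can be sketched on a toy one-dimensional integral: scrambled Sobol points typically reduce the estimator variance relative to uniform pseudo-random sampling (the actual study used photon bundles and LBL spectral data):

```python
# Compare the spread of MC and QMC estimates of a toy 1-D integral over
# repeated runs; the Sobol estimator usually shows a much smaller std.
import numpy as np
from scipy.stats import qmc

def integrand(x):                      # toy spectral absorption profile
    return np.exp(-3.0 * x) * (1 + 0.5 * np.sin(20 * x))

n, reps = 1024, 50                     # n is a power of two, as Sobol prefers
mc = [integrand(np.random.rand(n)).mean() for _ in range(reps)]
qmc_est = [integrand(qmc.Sobol(d=1, scramble=True).random(n).ravel()).mean()
           for _ in range(reps)]
print("MC  std:", np.std(mc))
print("QMC std:", np.std(qmc_est))     # typically much smaller
```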

Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
12 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects like asteroids serves two main goals: one is to find rare-earth elements, the other to gain scientific knowledge about the origin of the solar system. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may be applied to other, earth-bound applications such as deep-sea navigation and autonomous driving as well. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time-capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their use requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by specialized NLP methods, e.g., sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to the gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims such as short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level with IP to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. At the same time, they show the enormous increase in possible flight maneuvers gained by being able to consider different and even opposing mission objectives.
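To make the full-discretization idea concrete, here is a minimal Python sketch (a toy problem, not the KaNaRiA/WORHP setup): the states and controls of a one-dimensional double integrator are discretized, the ODE dynamics become equality constraints (defects), and the resulting NLP is handed to scipy's SLSQP, an SQP method; the weight w_u stands in for one term of a multi-criteria objective.

    import numpy as np
    from scipy.optimize import minimize

    N, T = 30, 10.0      # discretization nodes, fixed time horizon
    h = T / (N - 1)      # step size
    w_u = 1.0            # weighting factor for control effort (illustrative)

    def unpack(z):
        return z[:N], z[N:2*N], z[2*N:]          # position, velocity, control

    def objective(z):
        _, _, u = unpack(z)
        return w_u * h * np.sum(u ** 2)          # discretized integral of u^2

    def defects(z):
        x, v, u = unpack(z)
        dx = x[1:] - x[:-1] - h * v[:-1]         # explicit Euler dynamics defects
        dv = v[1:] - v[:-1] - h * u[:-1]
        bc = [x[0], v[0], x[-1] - 1.0, v[-1]]    # start at rest, arrive at x = 1 at rest
        return np.concatenate([dx, dv, bc])

    sol = minimize(objective, np.zeros(3 * N), method="SLSQP",
                   constraints={"type": "eq", "fun": defects},
                   options={"maxiter": 500})
    x, v, u = unpack(sol.x)
    print("converged:", sol.success, "| final state:", round(x[-1], 4), round(v[-1], 4))

The same pattern, with gravitational and thrust dynamics in the defects and additional weighted objective terms, scales up to the high-dimensional NLPs solved by WORHP in the paper.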

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 411
11 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes of comparatively low magnitude can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research works focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g., ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and the induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA of old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking the specific failure mechanisms of masonry into account. The macro-model was calibrated against the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage and crack formation are captured by reducing the initial strength after failure due to shear or tensile stress. As a result, once cracking begins, shear forces can only be transmitted to a limited extent by friction, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of the PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
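The post-processing step behind such a PHFA profile is straightforward; the following Python sketch (hypothetical, not the authors' workflow, with placeholder random data standing in for the nonlinear time-history results) computes the peak horizontal floor acceleration per record and floor and summarizes its distribution along the normalized building height:

    import numpy as np

    n_records, n_floors, n_steps = 18, 5, 4000    # e.g. 18 synthetic ground motions
    rng = np.random.default_rng(1)
    # placeholder ensemble: accel[record, floor, time step] in m/s^2
    accel = rng.normal(0.0, 1.0, (n_records, n_floors, n_steps))

    phfa = np.abs(accel).max(axis=2)              # peak |floor accel| per record and floor
    height = np.linspace(0.0, 1.0, n_floors)      # normalized building height z/H

    mean_profile = phfa.mean(axis=0)
    p84_profile = np.percentile(phfa, 84, axis=0) # 84th-percentile fractile
    for z, m, p in zip(height, mean_profile, p84_profile):
        print(f"z/H = {z:.2f}   mean PHFA = {m:.2f} m/s^2   84th pct = {p:.2f} m/s^2")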

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 167
10 IEEE802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With the advances in wireless technology, the wireless sensor network (WSN) has become one of the most promising candidates to implement the wireless industrial internet of things (IIoT) architecture. However, legacy IEEE 802.15.4 based WSN technology, such as the Zigbee system, cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIoT environment. Recently, the IEEE society developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose. Furthermore, the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e smoothly with the IPv6 protocol to form a complete protocol stack for the IIoT. In this work, we develop key network technologies for an IEEE 802.15.4e based wireless IIoT architecture, focusing on practical design and system implementation. We realize an OpenWSN-based wireless IIoT system. The system architecture is divided into three main parts: a web server, a network manager, and sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and instruct them to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information. It executes the centralized scheduling algorithm, sends the scheduling table to each node, and manages the sensing tasks of each device. Sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure the proper operation of the system. To enhance the performance of such a system, we further propose dynamic bandwidth allocation and distributed scheduling mechanisms. The developed distributed scheduling mechanism enables each individual sensor node to build, maintain, and manage the dedicated link bandwidth with its parent and child nodes based on locally observed information by exchanging Add/Delete commands via two processes. The first process, termed the schedule initialization process, allows each sensor node pair to identify the available idle slots to allocate the basic dedicated transmission bandwidth. The second process, termed the schedule adjustment process, enables each sensor node pair to adjust their allocated bandwidth dynamically according to the measured traffic loading, as sketched in the example below. Such technology can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low. This multi-frame retransmission mechanism allows every network node to resend each packet at least a predefined number of times, with the multi-frame architecture built according to the number of layers of the network topology. Performance results obtained via simulation reveal that such a retransmission scheme provides sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of the IIoT can be achieved.
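The following minimal Python sketch (hypothetical, not the authors' implementation; slotframe size, loads, and the one-packet-per-cell capacity are illustrative assumptions) shows the shape of the two distributed processes: a node pair first claims idle slots for its basic dedicated bandwidth, then adds or deletes cells as the measured traffic load changes.

    SLOTFRAME = 101   # slots per slotframe (illustrative)

    def init_schedule(busy_a, busy_b, n_cells):
        """Schedule initialization: pick n_cells slots idle at both nodes."""
        idle = [s for s in range(SLOTFRAME) if s not in busy_a and s not in busy_b]
        return set(idle[:n_cells])

    def adjust_schedule(cells, busy_a, busy_b, load, capacity_per_cell=1.0):
        """Schedule adjustment: grow/shrink the allocation to match traffic load."""
        needed = int(-(-load // capacity_per_cell))        # ceiling division
        while len(cells) < needed:                          # Add command
            free = init_schedule(busy_a | cells, busy_b | cells, 1)
            if not free:
                break                                       # no idle slot left
            cells |= free
        while len(cells) > max(needed, 1):                  # Delete command
            cells.remove(max(cells))
        return cells

    cells = init_schedule(busy_a={3, 7}, busy_b={7, 20}, n_cells=2)
    cells = adjust_schedule(cells, {3, 7}, {7, 20}, load=4.2)
    print("allocated slots:", sorted(cells))

In the real protocol, each Add/Delete step is negotiated between the node pair by exchanging commands, so both sides keep a consistent view of the shared cells.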

Keywords: IEEE 802.15.4e, industrial internet of things (IIoT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 160
9 Development of Portable Hybrid Renewable Energy System for Sustainable Electricity Supply to Rural Communities in Nigeria

Authors: Abdulkarim Nasir, Alhassan T. Yahaya, Hauwa T. Abdulkarim, Abdussalam El-Suleiman, Yakubu K. Abubakar

Abstract:

The need for a sustainable and reliable electricity supply in rural communities of Nigeria remains a pressing issue, given the country's vast energy deficit and the significant number of inhabitants lacking access to electricity. This research focuses on the development of a portable hybrid renewable energy system designed to provide a sustainable and efficient electricity supply to these underserved regions. The proposed system integrates multiple renewable energy sources, specifically solar and wind, to harness the abundant natural resources available in Nigeria. The design and development process involves the selection and optimization of components such as photovoltaic panels, wind turbines, energy storage units (batteries), and power management systems. These components are chosen based on their suitability for rural environments, cost-effectiveness, and ease of maintenance. The hybrid system is designed to be portable, allowing for easy transportation and deployment in remote locations with limited infrastructure. Key to the system's effectiveness is its hybrid nature, which ensures continuous power supply by compensating for the intermittent nature of the individual renewable sources. Solar energy is harnessed during the day, while wind energy is captured whenever wind conditions are favourable, thus ensuring a more stable and reliable energy output. Energy storage units are critical in this setup, storing excess energy generated during peak production times and supplying power during periods of low renewable generation. Site feasibility studies include assessing the solar irradiance, wind speed patterns, and energy consumption needs of rural communities. The simulation results inform the optimization of the system's design to maximize energy efficiency and reliability. This paper presents the development and evaluation of a 4 kW standalone hybrid system combining wind and solar power. The portable device measures approximately 8 feet 5 inches in width, 8 feet 4 inches in depth, and around 38 feet in height. It includes four solar panels with a capacity of 120 watts each, a 1.5 kW wind turbine, a solar charge controller, remote power storage, batteries, and battery control mechanisms. Designed to operate independently of the grid, this hybrid device offers versatility for use on highways and in various other applications. The paper also presents a summary and characterization of the device, along with photovoltaic data collected in Nigeria during the month of April. The construction plan for the hybrid energy tower is outlined, which involves combining a vertical-axis wind turbine with solar panels to harness both wind and solar energy. Positioned between the roadway divider and passing automobiles, the tower takes advantage of the air velocity generated by traffic. The solar panels are strategically mounted to deflect air toward the turbine while generating energy. Generators and gear systems attached to the turbine shaft enable power generation, offering a portable solution to energy challenges in Nigerian communities. The study also addresses the economic feasibility of the system, considering the initial investment costs, maintenance, and potential savings from reduced fossil fuel use. A comparative analysis with traditional energy supply methods highlights the long-term benefits and sustainability of the hybrid system.
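A minimal hourly energy-balance sketch in Python (illustrative figures only, not the authors' design data) shows how the stated component sizes, four 120 W panels plus a 1.5 kW wind turbine and a battery, interact to serve an assumed constant village load:

    import numpy as np

    hours = np.arange(24)
    pv_peak = 4 * 0.120                                  # kW, four 120 W panels
    pv = pv_peak * np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)  # daylight bell
    wind = 1.5 * 0.4 * np.random.default_rng(2).uniform(0, 1, 24)      # gusty output
    load = np.full(24, 0.35)                             # kW, assumed constant demand

    soc, cap = 2.0, 5.0                                  # battery state / capacity, kWh
    for h in hours:
        soc += pv[h] + wind[h] - load[h]                 # 1-hour steps -> kWh
        soc = min(max(soc, 0.0), cap)                    # battery cannot over/under-fill
        print(f"{h:02d}:00  PV {pv[h]:.2f} kW  wind {wind[h]:.2f} kW  SOC {soc:.2f} kWh")

Running such a balance against measured irradiance and wind-speed profiles is exactly what the site feasibility studies feed into when sizing the battery bank.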

Keywords: renewable energy, solar panel, wind turbine, hybrid system, generator

Procedia PDF Downloads 40
8 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are made of metal because CFRP fabrication costs are high for this size class. The fact that the CFRP manufacturing processes that produce the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally more expensive than comparably performing metal parts, which are easier to produce. Fortunately, business is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to just prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass-customization opportunities, and high mechanical performance. A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) with those of post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes the preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS (although nearly any high-quality commercial thermoplastic tapes and filaments can be used), are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, parts of higher quality, with very low void content and excellent surface finish on both A and B sides, can be produced. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be held cost-competitively at production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 94
7 Prospects of Acellular Organ Scaffolds for Drug Discovery

Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen

Abstract:

Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system, including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances in tissue engineering have demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as the heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging results as proof of concept. Decellularized organs preserve the organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models brings the morphological characteristics of the cell culture closer to its in vivo counterpart and allows more accurate simulation of the processes within a functioning tumor and its pathogenesis. 3D models also allow the study of migration processes and cell proliferation with higher reliability. Moreover, cancer cells in a 3D model bear closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable to accurately predict the responses of living organisms. 3D models can provide several levels of complexity, from simple monocultures of cancer cell lines in a liquid environment with oxygen and nutrient gradients and cell-cell interaction, to more advanced models, which include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be utilized to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step in creating a new ex vivo tissue-engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and evaluate their potential as tissue-engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animal kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system and immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) in different concentrations, and freezing. Acellular scaffolds and tissue-engineered constructs have been characterized and compared using morphological methods. Models using decellularized matrix have certain advantages, such as maintaining native extracellular matrix properties and a biomimetic microenvironment for cancer cells; compatibility with multiple cell types for cell culture and drug screening; and utility for culturing patient-derived cells in vitro to evaluate different anticancer therapeutics in the development of personalized medicines.

Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering

Procedia PDF Downloads 299
6 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop

Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen

Abstract:

Lenia is a system of cellular automata with continuous states, space, and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time, and states, thus providing additional fluidity and richness in the emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids, and over longer periods without prohibitive waiting times. It thereby enables the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when additional time-consuming algorithms, such as computer vision or machine learning, are included to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for the GPU using the C++ and CUDA programming languages, with CUDA/OpenGL interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability, and scalability. In our comparison, we focus on the most important Lenia implementations, selected for their prominence, accessibility, and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust, and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single- versus double-precision floating-point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
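For readers unfamiliar with the update rule being accelerated, the following NumPy sketch shows one Lenia time step in its standard reference formulation (this is not the authors' CUDA C++ code; the parameters mu, sigma, R are illustrative values): the grid is convolved with a ring-shaped kernel via FFT, a bell-shaped growth function is applied, and states are clipped to [0, 1].

    import numpy as np

    N, R, dt = 256, 13, 0.1          # grid size, kernel radius (cells), time step
    mu, sigma = 0.15, 0.015          # growth function center and width (illustrative)

    # Ring kernel: a bell curve over the normalized radius, zero outside R
    y, x = np.ogrid[-N//2:N//2, -N//2:N//2]
    r = np.hypot(x, y) / R
    K = (r < 1) * np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2))
    K /= K.sum()
    K_fft = np.fft.fft2(np.fft.ifftshift(K))        # precompute the kernel spectrum once

    def growth(u):
        return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

    def step(A):
        U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))   # toroidal convolution
        return np.clip(A + dt * growth(U), 0.0, 1.0)

    A = np.random.default_rng(3).random((N, N)) * (np.hypot(x, y) < 20)  # seeded blob
    for _ in range(100):
        A = step(A)
    print("total mass:", A.sum())

The FFT-based convolution above is precisely the step whose cost dominates on large grids and kernels, and thus the step where the CUDA implementation gains the most.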

Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis

Procedia PDF Downloads 40
5 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter

Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski

Abstract:

Today’s progress in rotorcraft is mostly associated with an optimization of aircraft performance achieved by active and passive modifications of the main rotor assemblies and the tail propeller. The key task is to improve their performance, i.e., to improve the hover quality factor of the rotors without a penalty in specific fuel consumption. One way to improve the helicopter is an active optimization of the main rotor across the flight stages, i.e., ascent, cruise, and descent. Active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. In current solutions, the efficiency of actuator systems modifying the aerodynamic coefficients is relatively high and significantly affects the structural strength. The solution for actively changing the aerodynamic characteristics assumes a periodic change of the geometric features of the blades depending on the flight stage. Changing the geometric parameters of blade warping enables an optimization of main rotor performance for each helicopter flight stage. Structurally, the adoption of shape memory alloys does not significantly affect rotor blade fatigue strength, which helps reduce the costs associated with adapting the system to existing blades, and the gains from better performance can easily amortize such a modification and improve the profitability of the structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is the selection of the design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor. The design concept assumes a three-bladed main rotor with a chord of 0.07 m and radius R = 1 m. The rotor speed is a calculated parameter of the optimization function. To specify the initial distribution of geometric warping, special software has been created that uses the numerical blade element method and respects dynamic design features such as the oscillations of a blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of the components and their mutual interaction resulting from the applied forces. The key elements of each rotor are the shaft, the hub, and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which gives grounds to obtain certification for the strength of the structure. The articulated rotor has numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear may appear, have been indicated. The numerical analysis showed that the most heavily loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element yield a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2, and in the case of the shaft, this factor is more than 3. However, it must be remembered that the structure is only as strong as its weakest element. The designed rotor for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
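To give a feel for the blade element method mentioned above, here is a deliberately simplified Python sketch (illustrative only, not the authors' software): the blade is split into radial elements and thrust is integrated for the three-bladed rotor with chord 0.07 m and radius 1 m of the design concept, neglecting induced inflow, twist, and joint dynamics, with an assumed rotor speed and collective pitch.

    import numpy as np

    rho = 1.225                       # air density, kg/m^3
    B, R, c = 3, 1.0, 0.07            # number of blades, rotor radius (m), chord (m)
    omega = 100.0                     # rotor speed, rad/s (assumed; a design variable above)
    a, theta = 5.7, np.radians(6.0)   # lift-curve slope (1/rad), collective pitch (assumed)

    r = np.linspace(0.15 * R, R, 60)  # radial stations with a 15% root cutout
    dr = r[1] - r[0]
    V = omega * r                     # local section speed
    cl = a * theta                    # constant section lift coefficient (no twist, no inflow)
    dT = 0.5 * rho * V ** 2 * c * cl * dr   # elemental thrust for one blade

    T = B * dT.sum()
    print(f"estimated hover thrust: {T:.1f} N (~{T / 9.81:.1f} kgf)")

The authors' software extends this basic scheme with inflow and the dynamic behavior of the blade in its joints, which is what makes the initial warping distribution meaningful.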

Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter

Procedia PDF Downloads 156
4 Developing a Framework for Sustainable Social Housing Delivery in Greater Port Harcourt City Rivers State, Nigeria

Authors: Enwin Anthony Dornubari, Visigah Kpobari Peter

Abstract:

This research has developed a framework for the provision of sustainable and affordable housing to accommodate the low-income population of Greater Port Harcourt City. The objectives of this study were, among others, to: examine UN-Habitat guidelines for acceptable and sustainable social housing provision; describe past efforts of the Rivers State Government and the Federal Government of Nigeria to provide housing for the poor in the Greater Port Harcourt City area; obtain a profile of prospective beneficiaries of the social housing proposed by this research, as well as their perceptions of their present living conditions and of living in the proposed self-sustaining social housing development, based on the initial simulation of the proposal; describe the nature of the framework, its guidelines, and the management of the proposed social housing development; and explain the modalities for its implementation. The study utilized a mixed-methods research approach aimed at triangulating findings from the quantitative and qualitative paradigms. Opinions of professionals of the built environment, the Director of Development Control of the Greater Port Harcourt City Development Authority, directors of the Ministry of Urban Development and Physical Planning and of the Housing and Property Development Authority, and managers of selected primary mortgage institutions were sought and analyzed. There were four target populations for the study, namely: members of occupational sub-groups for FGDs (Focus Group Discussions); development professionals for KIIs (Key Informant Interviews); household heads in selected communities of GPHC; and relevant public officials for IDIs (Individual Depth Interviews). Focus Group Discussions were held with members of occupational sub-groups (fisherfolk) in each of the eight selected communities. There were forty (40) members across all occupational sub-groups in each selected community, yielding a total of 320 in the eight (8) communities of Mgbundukwu (Mile 2 Diobu), Rumuodomaya, Abara (Etche), Igwuruta-Ali (Ikwerre), Wakama (Ogu-Bolo), Okujagu (Okrika), Akpajo (Eleme), and Okoloma (Oyigbo). For the key informant interviews, two (2) members were judgmentally selected from each of the following development professions: urban and regional planners; architects; estate surveyors; land surveyors; quantity surveyors; and engineers. Concerning Population 3, household heads in selected communities of GPHC, a stratified multi-stage sampling procedure was adopted. Stage 1 obtained a 10% (a priori decision) sample of the component communities of GPHC in each stratum; the number in each stratum was rounded up to a whole number to ensure representation of every stratum. Stage 2 obtained the number of households to be studied by applying the Taro Yamane formula, which determines the appropriate number of cases to be studied at a precision level of 5% (a worked sketch follows below). Findings revealed, amongst others, that poor implementation of the UN-Habitat global shelter strategy, lack of stakeholder engagement, inappropriate locations, undue bureaucracy, lack of housing fairness and equity, and the high cost of land and building materials were the reasons for the failure of past efforts at social housing provision in the Greater Port Harcourt City area. The study recommended a public-private partnership approach for the implementation and management of the framework. It also recommended a robust and sustained relationship between the management of the framework, the UN-Habitat office, other relevant government agencies responsible for housing development, and all investment partners, to create trust and efficiency.
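The Taro Yamane formula used in Stage 2 is n = N / (1 + N·e²), where N is the population size and e the precision level. A minimal Python sketch (the population figure below is a placeholder, not a value from the study) shows how the sample size follows at the study's 5% precision:

    def yamane(N, e=0.05):
        """Sample size for population N at precision (margin of error) e."""
        return round(N / (1 + N * e ** 2))

    N = 12_000                     # hypothetical number of households in a stratum
    print(f"households to survey: {yamane(N)} of {N}")   # -> 387

Note that for large N the formula saturates near 1/e² = 400 households, which is why stratum sample sizes at 5% precision cluster below that figure regardless of community size.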

Keywords: development, framework, low-income, sustainable, social housing

Procedia PDF Downloads 248
3 Computational Fluid Dynamics Simulation of a Nanofluid-Based Annular Solar Collector with Different Metallic Nano-Particles

Authors: Sireetorn Kuharat, Anwar Beg

Abstract:

Motivation: Solar energy constitutes the most promising renewable energy source on earth. Nanofluids are a very successful family of engineered fluids, which contain well-dispersed nanoparticles suspended in a stable base fluid. The presence of metallic nanoparticles (e.g., gold, silver, copper, aluminum) significantly improves the thermo-physical properties of the host fluid and generally results in a considerable boost in the thermal conductivity, density, and viscosity of the nanofluid compared with the original base (host) fluid. This modification in fundamental thermal properties has profound implications for the convective heat transfer process in solar collectors. The potential for improving the efficiency of direct-absorber solar collectors is immense, and to gain a deeper insight into the impact of different metallic nanoparticles on efficiency and temperature enhancement, in the present work we describe recent computational fluid dynamics simulations of an annular solar collector system. The present work studies several different metallic nanoparticles and compares their performance. Methodologies: A numerical study of convective heat transfer in an annular pipe solar collector system is conducted. The inner tube contains pure water, and the annular region contains nanofluid. Three-dimensional, steady-state, incompressible laminar flow comprising water-based (and other) nanofluids containing a variety of nanoparticles (copper oxide, aluminum oxide, and titanium oxide) is examined. The Tiwari-Das model is deployed, for which the thermal conductivity, specific heat capacity, and viscosity of the nanofluid suspensions are evaluated as a function of the solid nanoparticle volume fraction. Radiative heat transfer is also incorporated using the ANSYS solar flux and Rosseland radiative models. The ANSYS FLUENT finite volume code (version 18.1) is employed to simulate the thermo-fluid characteristics via the SIMPLE algorithm. Mesh-independence tests are conducted. Validation of the simulations is also performed against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, and excellent correlation is achieved. The influence of volume fraction on temperature, velocity, and pressure contours is computed and visualized. Main findings: The best overall performance is achieved with copper oxide nanoparticles. Thermal enhancement is generally maximized when water is utilized as the base fluid, although in certain cases ethylene glycol also performs very efficiently. Increasing the nanoparticle solid volume fraction elevates temperatures, although the effects are less prominent in aluminum oxide and titanium oxide nanofluids. Significant improvement in temperature distributions is achieved with copper oxide nanofluid, which is attributed to the superior thermal conductivity of copper compared to the other nanoparticles studied. Important fluid dynamic characteristics are also visualized, including circulation and temperature overshoots near the upper region of the annulus. Radiative flux is observed to enhance temperatures significantly via energization of the nanofluid, although again the best improvement in performance is attained consistently with copper oxide. Conclusions: The current study generalizes previous investigations by considering multiple nanoparticles and, furthermore, provides a good benchmark against which to calibrate experimental tests on a new solar collector configuration currently being designed at Salford University. Important insights into the thermal conductivity and viscosity of metallic nanoparticle suspensions are also provided in detail. The analysis is also extendable to other metallic nanoparticles, including gold and zinc.
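The property evaluation step of the Tiwari-Das approach is commonly built from the standard mixture, Brinkman, and Maxwell correlations; the following Python sketch illustrates these relations (the particle and base-fluid property values are typical literature figures, not the authors' inputs):

    def nanofluid_props(phi, rho_f, rho_s, cp_f, cp_s, mu_f, k_f, k_s):
        rho = (1 - phi) * rho_f + phi * rho_s                      # mixture density
        rho_cp = (1 - phi) * rho_f * cp_f + phi * rho_s * cp_s     # volumetric heat capacity
        mu = mu_f / (1 - phi) ** 2.5                               # Brinkman viscosity
        k = k_f * (k_s + 2 * k_f - 2 * phi * (k_f - k_s)) / (
            k_s + 2 * k_f + phi * (k_f - k_s))                     # Maxwell conductivity
        return rho, rho_cp / rho, mu, k

    # Water base fluid with CuO nanoparticles at 4% volume fraction (assumed values)
    rho, cp, mu, k = nanofluid_props(phi=0.04,
                                     rho_f=997.1, rho_s=6500.0,
                                     cp_f=4179.0, cp_s=540.0,
                                     mu_f=8.9e-4, k_f=0.613, k_s=18.0)
    print(f"rho={rho:.0f} kg/m^3  cp={cp:.0f} J/kgK  mu={mu:.2e} Pa.s  k={k:.3f} W/mK")

These effective properties are what the CFD solver receives in place of the pure-fluid values, which is how the volume fraction enters the momentum and energy equations.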

Keywords: heat transfer, annular nanofluid solar collector, ANSYS FLUENT, metallic nanoparticles

Procedia PDF Downloads 142
2 Gamification Beyond Competition: the Case of DPG Lab Collaborative Learning Program for High-School Girls by GameLab KBTU and UNICEF in Kazakhstan

Authors: Nazym Zhumabayeva, Aleksandr Mezin, Alexandra Knysheva

Abstract:

Women's underrepresentation in STEM is a critical issue, worsened by ineffective engagement practices in education. Collaborative initiatives by UNICEF Kazakhstan and GameLab KBTU aim to enhance female STEM participation by fostering an inclusive environment. Learning from LEVEL UP's 2023 program, which featured a hackathon, the 2024 strategy pivots towards non-competitive gamification. Although the data from last year's project showed higher-than-average student engagement, observations and in-depth interviews with participants showed that the format was stressful for the girls, making them focus on points rather than on other values. This study presents a gamified educational system, DPG Lab, aimed at incentivizing young women's participation in STEM through the development of digital public goods (DPGs). By prioritizing collaborative gamification elements, the project seeks to create an inclusive learning environment that increases engagement and interest in STEM among young women. The DPG Lab aims to minimize competition and support collaboration. The project is designed to motivate female participants towards the development of digital solutions through an introduction to the concept of DPGs. It consists of a short online course, a simulation video game, and a real-time online quest with an offline finale at the KBTU campus. The online course offers short video lectures on open-source development and DPG standards. The game facilitates the practical application of theoretical knowledge, enriching the learning experience. Learners can also participate in a quest that encourages participants to develop DPG ideas in teams by choosing missions along the quest path. At the offline quest finale, the participants will meet in person to exchange experiences and accomplishments without engaging in comparative assessments: the quest ensures that each team's trajectory is distinct by design. This marks a shift from competitive hackathons to a collaborative format, recognizing the unique contributions and achievements of each participant. The pilot batch of students is scheduled to commence in April 2024, with the finale anticipated in June. It is projected that this group will comprise 50 female high-school students from various regions across Kazakhstan. Expected outcomes include increased engagement and interest in STEM fields among young female participants, a positive emotional and psychological impact through an emphasis on collaborative learning environments, and improved understanding and skills in DPG development. GameLab KBTU intends to undertake a hypothesis evaluation, employing a methodology similar to that utilized in the preceding LEVEL UP project. This approach will encompass the compilation of quantitative metrics (conversion funnels, test results, and surveys) and qualitative data from in-depth interviews and observational studies. For comparative analysis, a select group of participants from the previous year's project will be recruited to engage in the DPG Lab. By developing and implementing a gamified framework that emphasizes inclusion, engagement, and collaboration, the study seeks to provide practical knowledge about effective gamification strategies for promoting gender diversity in STEM. The expected outcomes of this initiative can contribute to the broader discussion on gamification in education and gender equality in STEM by offering a replicable and scalable model for similar interventions around the world.

Keywords: collaborative learning, competitive learning, digital public goods, educational gamification, emerging regions, STEM, underprivileged groups

Procedia PDF Downloads 60
1 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output

Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera

Abstract:

Nowadays, the rapid growth of global energy consumption and the uncontrolled depletion of natural resources have become a serious problem. Shale rocks constitute some of the largest potential global reservoirs of hydrocarbons, trapped in the closed pores of the shale matrix. Regardless of the shales' origin, mining conditions are extremely unfavourable due to high reservoir pressure, great depths, increased clay mineral content, and the limited permeability (nanodarcy) of the rocks. Taking such geomechanical barriers into consideration, effective extraction of natural gas from shales with plastic zones demands carefully designed operations. Currently, hydraulic fracturing is the most developed technique, based on the injection of pressurized fluid into a wellbore to initiate fracture propagation. However, the rapid drop of pressure after the fluid flows back to the surface induces fracture closure and a reduction in conductivity. In order to minimize this risk, proppants should be applied. They are solid granules transported with the hydraulic fluids to lodge inside the rock. Proppants act as a prop for the closing fracture, so that gas migration to the borehole remains effective. Quartz sands are commonly applied as proppants only in shallow deposits (USA), whereas ceramic proppants are designed to meet rigorous downhole conditions and intensify output. Ceramic granules stand out through their higher mechanical strength, stability in a strongly acidic environment, spherical shape, and homogeneity. The quality of ceramic proppants is conditioned by the selection of raw materials. The aim of this study was to obtain proppants from aluminosilicates (the kaolinite subgroup) and a mix of minerals with a high alumina content. These loamy minerals exhibit a tubular and platy morphology that improves mechanical properties and reduces their specific weight. Moreover, they are distinguished by a well-developed surface area, high porosity, fine particle size, superb dispersion, and nontoxic properties, all crucial for consolidating the particles into spherical, crush-resistant granules in the mechanical granulation process. The aluminosilicates were mixed with water and a natural organic binder to improve liquid-bridge and pore formation between the particles. Afterward, the green proppants were subjected to sintering at high temperatures. Evaluation of the minerals' utility was based on their particle size distribution (laser diffraction study) and thermal stability (thermogravimetry). Scanning Electron Microscopy was used for morphology and shape identification, combined with specific surface area measurement (BET). The chemical composition was verified by Energy Dispersive Spectroscopy and X-ray Fluorescence. Moreover, the bulk density and specific weight were measured. Such comprehensive characterization of the loamy materials confirmed their favourable impact on proppant granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. The pore distribution was identified by X-ray Tomography. This method also enabled the simulation of proppant settlement in a fracture, while the measurement of bulk density was essential to predict the amount needed to fill a well. The roundness coefficient was also evaluated, whereas the impact on the mining environment was assessed by turbidity and solubility in acid, to indicate the risk of material decay in a well. The obtained outcomes confirmed a positive influence of the loamy minerals on ceramic proppant properties with respect to the strict industry norms. This research opens prospects for the production of higher-quality proppants at reduced cost.

Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas

Procedia PDF Downloads 161