Search results for: reliability modeling
488 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches
Authors: Shani Brathwaite, Deborah Villarroel-Lamb
Abstract:
Wave run-up may be defined as the time-varying position of the landward extent of the water's edge, measured vertically from the mean water level. The hydrodynamics of the swash zone and the accurate prediction of maximum wave run-up play a critical role in coastal engineering: an understanding of these processes is necessary for modeling sediment transport and beach recovery and for the design and maintenance of coastal engineering structures. However, due to the complex nature of the swash zone, detailed knowledge in this area remains limited. In particular, bed porosity, and hence infiltration/exfiltration processes, has received insufficient consideration in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity: the greater the rate of infiltration during an event, associated with a larger bed porosity, the lower the magnitude of the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone. It also pays particular attention to how well various empirical formulae predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches. Wave data from wave gauges and wave models will be used, together with porosity measurements collected using a double-ring infiltrometer. The relationship between maximum wave run-up and differing physical parameters will be investigated using correlation analyses.
These physical parameters comprise wave and beach characteristics such as wave height, wave direction, wave period, beach slope, the magnitude of wave setup, and beach porosity. Existing parameterizations of maximum wave run-up are expressed in terms of differing parameters and do not always have good predictive capability. This study seeks to improve the formulation of wave run-up by using the aforementioned parameters to generate a new formulation, with a special focus on the influence of infiltration/exfiltration processes. This will further contribute to improved prediction of sediment transport, beach recovery, and the design of coastal engineering structures in Trinidad.
Keywords: beach porosity, empirical models, infiltration, swash, wave run-up
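One widely used empirical formula of the kind benchmarked here is the Stockdon et al. (2006) run-up parameterization, which predicts the 2% exceedance run-up from deep-water wave height, peak period, and foreshore slope; notably, it contains no bed-porosity term, which is precisely the gap this study targets. A minimal sketch with illustrative inputs (not the Trinidad field data):

```python
import math

def stockdon_r2(h0, t, beta):
    """2% exceedance run-up (m) from the Stockdon et al. (2006) formula.
    h0: deep-water significant wave height (m), t: peak period (s),
    beta: foreshore beach slope (dimensionless)."""
    l0 = 9.81 * t ** 2 / (2 * math.pi)          # deep-water wavelength (m)
    setup = 0.35 * beta * math.sqrt(h0 * l0)    # wave-setup component
    swash = math.sqrt(h0 * l0 * (0.563 * beta ** 2 + 0.004)) / 2
    return 1.1 * (setup + swash)

# Illustrative swell: H0 = 1.5 m, T = 8 s, beach slope 0.1
print(round(stockdon_r2(1.5, 8.0, 0.1), 2))
```

A porosity-aware formulation of the type proposed in this study would add an infiltration-dependent correction to the swash term.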
Procedia PDF Downloads 357
487 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.
Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications
Procedia PDF Downloads 93
486 The Perceptions of Parents Regarding the Appropriateness of the Early Childhood Financial Literacy Program for Children 3 to 6 Years of Age Presented at an Early Childhood Facility in South Africa: A Case Study
Authors: M. Naude, R. Joubert, A. du Plessis, S. Pelser, M. Trollip
Abstract:
Context: The study focuses on the perceptions of South African parents and teachers regarding a play-based financial literacy program for children aged 3 to 6 years at an early childhood facility. It emphasizes the importance of early interventions in financial education to reduce poverty and inequality. Research Aim: To explore how parental involvement in teaching money management concepts to young children can support financial literacy education both at school and at home. Methodology: A qualitative deductive case study was conducted at a South African early childhood facility involving 90 children, their teachers, and their families. Thematic content analysis of online survey responses and focus group discussions with teachers was used to identify patterns and themes related to participants' perceptions of the financial literacy program. Validity: The study's validity and reproducibility are ensured by the depth and honesty of the data, participant involvement, and the inquirer's objectivity. Reliability aligns with the interpretive paradigm of this study, while transparency in data gathering and analysis enhances its trustworthiness. Credibility is further supported by two triangulation methods: focus group interviews with teachers and open-ended questionnaires completed by parents. Findings: Parents reported overall satisfaction with the program and highlighted the development of essential money management skills in their children. They emphasized the collaborative role of home and school environments in fostering financial literacy in early childhood. Teachers reported that communication and interaction with parents increased, and healthy, positive relationships were established between teachers and parents, which contributed to the success of the classroom financial literacy program.
Theoretical Importance: The study underscores the significance of play-based financial literacy education in early childhood and the critical role of parental involvement in reinforcing money management concepts. It contributes to laying a solid foundation for children's future financial well-being. Data Collection: Data was collected through an online survey administered to parents of children participating in the financial literacy program over a period of 10 weeks. Focus group discussions were utilized with the teachers of each class after the conclusion of the program. Analysis Procedures: Thematic content analysis was applied to the survey responses to identify patterns, themes, and insights related to the participants' perceptions of the program's effectiveness in teaching money management concepts to young children. Question Addressed: How does parental involvement in teaching money management concepts to young children support financial literacy education in early childhood? Conclusion: The study highlights the positive impact of a play-based financial literacy program for children aged 3 to 6 years and underscores the importance of collaboration between home and school environments in fostering financial literacy skills.
Keywords: early childhood, financial literacy, money management, parent involvement, play-based learning, South Africa
Procedia PDF Downloads 14
485 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from Ferroalloys Plant
Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen
Abstract:
Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting of fused aromatic rings containing only hydrogen and carbon. PAH are neutral, non-polar molecules produced by the incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the outer and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to increase their efforts to reduce emissions considerably. One of the objectives of the ongoing "SCORE" project, supported by industry and the Norwegian Research Council, is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH itself. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data for the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain, and there are only a limited number of studies. This paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, oxidation is modeled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor modeling approach, the oxidation is modeled with advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water.
A chemical reactor network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in evaluating the oxidation behavior of PAH under various conditions.
Keywords: PAH, PSR, energy recovery, ferroalloy furnace
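The role of residence time and temperature can be illustrated with the simplest possible PSR model: a first-order global reaction in a continuously stirred reactor, whose steady-state conversion is X = k*tau / (1 + k*tau) with k = A*exp(-Ea/RT). The Arrhenius parameters below are illustrative placeholders, not fitted PAH kinetics:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def psr_conversion(temp_k, tau_s, a=1e10, ea=2.0e5):
    """Steady-state conversion of a first-order global reaction in a
    perfectly stirred reactor: X = k*tau / (1 + k*tau), k = A*exp(-Ea/RT).
    a and ea are illustrative placeholders, not fitted PAH kinetics."""
    k = a * math.exp(-ea / (R_GAS * temp_k))
    return k * tau_s / (1 + k * tau_s)

# Conversion rises steeply with temperature at a fixed residence time
for temp in (1000, 1200, 1400):
    print(temp, round(psr_conversion(temp, tau_s=0.5), 3))
```

A full CRN would chain several such reactors with detailed chemistry, but the trade-off between residence time and minimum temperature is already visible in this single-reactor form.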
Procedia PDF Downloads 273
484 The Use of Solar Energy for Cold Production
Authors: Nadia Allouache, Mohamed Belmedani
Abstract:
It is imperative today to further explore alternatives to fossil fuels, in particular by promoting renewable sources such as solar energy to produce cold. It is also important to carefully examine the current state of this technology as well as its future prospects in order to identify the best conditions for its optimal development. Technologies linked to this alternative source fascinate their users because they seem able to transform solar energy directly into cooling without resorting to polluting fuels such as those derived from hydrocarbons or other toxic substances. In addition, they not only allow significant savings in electricity but can also help reduce the costs of electrical energy production when applied on a large scale. In this context, our study aims to analyze the performance of solar adsorption cooling systems by selecting an appropriate adsorbent/adsorbate pair. This paper presents a model describing heat and mass transfer in the tubular finned adsorber of a solar adsorption refrigerating machine. The model of the solar reactor takes the heat and mass transfer phenomena into account. The reactor pressure is assumed to be uniform, and the reactive medium is characterized by an equivalent thermal conductivity and assumed to be at chemical and thermodynamic equilibrium. The numerical model is governed by the heat, mass, and sorption equilibrium equations. Under the action of solar radiation, the adsorbent-adsorbate mixture exhibits transient behavior. The effects of key parameters on the adsorbed quantity and on the thermal and solar performance are analyzed and discussed. The results show that the performance of the system, which depends on the incident global irradiance over the whole day, is governed by the weather conditions. For the working pairs used, increasing the number of fins reduces heat losses to the environment and enhances heat transfer inside the adsorber.
The system performance is sensitive to the evaporator and condenser temperatures. For data measured on clear days in May and July 2023 in Algeria and Tunisia, the cooling system performs considerably better in Algeria than in Tunisia.
Keywords: adsorption, adsorbent-adsorbate pair, finned reactor, numerical modeling, solar energy
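The solar performance figure typically reported for such systems is the solar COP: the cold produced at the evaporator divided by the solar energy received over the day. A minimal sketch with illustrative numbers (not the paper's measured Algerian/Tunisian data):

```python
def solar_cop(delta_m, latent_heat, g_daily, collector_area):
    """Solar COP of an adsorption chiller: cold produced at the evaporator
    divided by the solar energy received by the collector over the day.
    delta_m: adsorbate mass cycled per day (kg),
    latent_heat: latent heat of evaporation (kJ/kg),
    g_daily: daily global irradiation (kJ/m^2),
    collector_area: collector aperture (m^2)."""
    q_evap = delta_m * latent_heat            # daily cold production (kJ)
    return q_evap / (g_daily * collector_area)

# Illustrative values: 10 kg of adsorbate cycled, Lv = 1100 kJ/kg,
# 20 MJ/m^2 daily irradiation, 2 m^2 collector
print(round(solar_cop(10, 1100, 20000, 2.0), 3))
```

Because the denominator is the daily irradiation, the same machine yields different solar COPs under the Algerian and Tunisian weather conditions compared in the study.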
Procedia PDF Downloads 18
483 Define Immersive Need Level for Optimal Adoption of Virtual Worlds with BIM Methodology
Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin
Abstract:
In the construction industry, there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required. This transition is important to improve knowledge circulation, product quality, production sustainability, and user satisfaction. However, there is currently no common definition of immersion in the construction industry, leading to misunderstandings and limiting the use of advanced immersive technologies. Furthermore, the lack of guidelines and a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the "Immersive Need Level." This concept, matured by recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies, avoiding misunderstandings, redundancies, or paradoxes. While the concept of "Informational Need Level" has been well clarified by the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature provide only some hints about the technology and related equipment, leaving the procedural approach unexplored and open to the user's free interpretation. Therefore, once the necessary knowledge and information are acquired (Informational Need Level), it is possible to transition to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions in a more thorough and detailed manner, with user involvement at different immersion scales, in the design, construction, or management process of a building or infrastructure.
The need for information constitutes the basis for acquiring relevant knowledge, while the immersive need can manifest itself later, once a solid information base has been established, engaging the senses and developing immersive awareness. This new approach could overcome the inertia of AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.
Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive need level
Procedia PDF Downloads 99
482 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data
Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann
Abstract:
Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is therefore crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single-lap-joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing.
For this purpose, an inverse technique is presented that determines the damage location and size based on the modal frequency shift and the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios and comparison to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers
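The inverse identification step can be illustrated with a toy version of the optimization: fitting the parameters of a one-branch generalized Maxwell model to synthetic "measured" FRF data by brute-force search. A real identification would use gradient-based optimization, several Maxwell branches, and measured FRFs; all values below are invented for illustration:

```python
import math

def maxwell_modulus(omega, g_inf, g1, tau):
    """Storage and loss modulus of a one-branch generalized Maxwell model."""
    wt = omega * tau
    storage = g_inf + g1 * wt ** 2 / (1 + wt ** 2)
    loss = g1 * wt / (1 + wt ** 2)
    return storage, loss

# Synthetic "measured" data generated with known parameters
true = (1.0e6, 4.0e6, 1.0e-3)
freqs = [10 * 2 ** i for i in range(8)]  # 10 Hz .. 1280 Hz
meas = [maxwell_modulus(2 * math.pi * f, *true) for f in freqs]

# Brute-force inverse identification: minimize the squared FRF mismatch
best, best_err = None, float("inf")
for g_inf in (0.5e6, 1.0e6, 2.0e6):
    for g1 in (2e6, 4e6, 8e6):
        for tau in (5e-4, 1e-3, 2e-3):
            err = 0.0
            for f, m in zip(freqs, meas):
                s_val, l_val = maxwell_modulus(2 * math.pi * f, g_inf, g1, tau)
                err += (s_val - m[0]) ** 2 + (l_val - m[1]) ** 2
            if err < best_err:
                best, best_err = (g_inf, g1, tau), err
print(best == true)
```

The search recovers the generating parameters because they lie on the grid; the paper's approach replaces the grid with a proper optimizer and the synthetic data with shaker measurements.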
Procedia PDF Downloads 205
481 Relative Importance of Different Mitochondrial Components in Maintaining the Barrier Integrity of Retinal Endothelial Cells: Implications for Vascular-Associated Retinal Diseases
Authors: Shaimaa Eltanani, Thangal Yumnamcha, Ahmed S. Ibrahim
Abstract:
Purpose: Mitochondrial dysfunction is central to the breakdown of the barrier integrity of retinal endothelial cells (RECs) in various blinding eye diseases such as diabetic retinopathy and retinopathy of prematurity. Therefore, we aimed to dissect the role of different mitochondrial components, specifically those of oxidative phosphorylation (OxPhos), in maintaining the barrier function of RECs. Methods: Electric cell-substrate impedance sensing (ECIS) technology was used to assess in real time the role of different mitochondrial components in the total impedance (Z) of human RECs (HRECs) and its components: the capacitance (C) and the total resistance (R). HRECs were treated with specific mitochondrial inhibitors that target different steps in OxPhos: rotenone for complex I, oligomycin for ATP synthase, and FCCP for uncoupling OxPhos. Furthermore, the data were modeled to investigate the effects of these inhibitors on the three parameters that govern the total resistance of cells: cell-cell interactions (Rb), cell-matrix interactions (α), and cell membrane permeability (Cm). Results: Rotenone (1 µM) produced the greatest reduction in Z, followed by FCCP (1 µM), whereas no reduction in Z was observed after treatment with oligomycin (1 µM). Following this further, we deconvoluted the effect of these inhibitors on Rb, α, and Cm. Firstly, rotenone (1 µM) completely abolished the resistance contribution of Rb, which became zero immediately after treatment. Secondly, FCCP (1 µM) eliminated the resistance contribution of Rb only after 2.5 hours and increased Cm without a considerable effect on α. Lastly, oligomycin had the lowest impact among these inhibitors on Rb, which became similar to the control group by the end of the experiment, without noticeable effects on Cm or α.
Conclusion: These results demonstrate differential roles for complex I, complex V, and the coupling of OxPhos in maintaining the barrier functionality of HRECs, with complex I being the most important component in regulating the barrier functionality and spreading behavior of HRECs. Such differences can be used in investigating gene expression as well as in screening selective agents that improve the functionality of complex I for use in therapeutic approaches to treating REC-related retinal diseases.
Keywords: human retinal endothelial cells (HRECs), rotenone, oligomycin, FCCP, oxidative phosphorylation, OxPhos, capacitance, impedance, ECIS modeling, Rb resistance, α resistance, barrier integrity
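ECIS time courses are conventionally normalized to each well's pre-treatment baseline so that inhibitor effects can be compared across wells. As a minimal illustration of that bookkeeping (hypothetical resistance values, not the study's data or its Rb/α/Cm model fit):

```python
def normalize_trace(resistances, baseline_window=3):
    """Normalize an ECIS resistance time course to the mean of its
    pre-treatment baseline samples, so 1.0 means no barrier change.
    A common convention for comparing wells; not the study's exact pipeline."""
    baseline = sum(resistances[:baseline_window]) / baseline_window
    return [r / baseline for r in resistances]

# Hypothetical well: stable baseline, then barrier loss after treatment
trace = [1200.0, 1180.0, 1220.0, 900.0, 700.0, 650.0]
norm = normalize_trace(trace)
print(round(norm[0], 2), round(norm[-1], 2))
```

The fractional drop after treatment (here to roughly half of baseline) is the kind of readout that, after model deconvolution, is attributed to Rb, α, or Cm.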
Procedia PDF Downloads 100
480 The Relationship between Proximity to Sources of Industrial-Related Outdoor Air Pollution and Children Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010
Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk
Abstract:
Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) has important industrial emissions to the air from the Industrial Heartland Alberta (IHA) in the northeast and the coal-fired power plants (CFPP) in the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data using descriptive, hypothesis testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3/1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in the risk of children's asthma ED visits as distance from the CFPP area increases, and this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest a significant increase in risk as distance from the IHA area increases, and this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP, but not around the IHA, within the CMAE.
These results are probably explained by the dispersion of air pollutants under the predominant and subdominant wind directions at each site. Using different approaches to detect disease clusters is valuable for better understanding the presence, shape, direction, and size of disease clusters around pollution sources.
Keywords: air pollution, asthma, disease cluster, industry
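The circular spatial scan test used above scores each candidate zone with a Poisson log-likelihood ratio and retains the zone that maximizes it. A minimal sketch of the zone statistic (hypothetical counts; a full scan would evaluate many circles of varying radius and assess significance by Monte Carlo replication):

```python
import math

def poisson_llr(c, e, c_total):
    """Kulldorff-style log-likelihood ratio for one candidate circular zone.
    c: observed cases inside the zone, e: expected cases inside the zone
    (proportional to population at risk), c_total: total observed cases.
    Only elevated-risk zones (c > e) are of interest here."""
    if c <= e:
        return 0.0
    inside = c * math.log(c / e)
    outside = (c_total - c) * math.log((c_total - c) / (c_total - e))
    return inside + outside

# Hypothetical zone: 30 asthma ED visits observed where 15 were expected,
# out of 200 visits in the whole study region
print(round(poisson_llr(30, 15, 200), 2))
```

The DA-level analysis in the study applies this idea over all candidate circles; the circle with the maximum statistic is the reported cluster.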
Procedia PDF Downloads 282
479 Challenging Conventions: Rethinking Literature Review Beyond Citations
Authors: Hassan Younis
Abstract:
Purpose: The objective of this study is to review influential papers in the sustainability and supply chain domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. Subsequently, the study applies the developed framework to selected scholarly articles within the sustainability and supply chain domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Using the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation count (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion; scores were normalized within each category and combined using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in an over-reliance on citation counts. In response, a comprehensive model incorporating five essential criteria is proposed. A systematic trial on selected academic articles in the sustainability and supply chain domain demonstrated the model's effectiveness as a tool for identifying influential research papers that warrant additional attention.
This work fills a significant gap in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool that helps researchers, students, and practitioners find and choose influential papers; it also supports literature reviews and the selection of future research directions. The analysis of major trends and topics deepens our grasp of the changing terrain of this critical area of study. Originality/Value: The framework stands as a unique contribution to academia, offering scholars a new tool to identify and validate influential publications. Its capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
Keywords: supply chain management, sustainability, framework, model
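The FNS computation described above can be sketched directly. The abstract specifies the weights and the 1-3 tiers, but not the exact within-category normalization, so the (tier - 1) / 2 mapping below is an assumption:

```python
# Weights of the five ACMAJ criteria as given in the abstract
WEIGHTS = {
    "avg_yearly_citations": 0.25,
    "scholarly_contribution": 0.25,
    "findings_alignment": 0.15,
    "methodological_rigor": 0.20,
    "journal_impact": 0.15,
}

def final_normalized_score(tiers):
    """Combine tiered scores (1-3 per criterion) into a Final Normalized
    Score. Each tier is mapped to [0, 1] via (tier - 1) / 2 -- an assumed
    normalization, since the abstract does not specify one -- then weighted."""
    return sum(WEIGHTS[k] * (tiers[k] - 1) / 2 for k in WEIGHTS)

# Hypothetical paper scored on the five criteria
paper = {
    "avg_yearly_citations": 3,
    "scholarly_contribution": 2,
    "findings_alignment": 3,
    "methodological_rigor": 2,
    "journal_impact": 1,
}
print(round(final_normalized_score(paper), 3))
```

Ranking a corpus then reduces to sorting papers by their FNS, which is the comparison step the framework trial performs.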
Procedia PDF Downloads 52
478 Numerical Validation of Liquid Nitrogen Phase Change in a Star-Shaped Ambient Vaporizer
Authors: Yusuf Yilmaz, Gamze Gediz Ilis
Abstract:
Nitrogen, which has a boiling point of approximately -195.8 °C at atmospheric pressure, is widely used in industry in gaseous form, but it must be transported to the plant area in liquid form. Ambient air vaporizers (AAV) are generally used for the vaporization of cryogenic liquids such as liquid nitrogen (LN2), liquid oxygen (LOX), liquefied natural gas (LNG), and liquid argon (LAR). An AAV consists of a group of star-shaped finned pipes, and the design and fin shape of the vaporizer are among the most important criteria for its performance. In this study, the performance of an AAV working with liquid nitrogen was analyzed numerically for a star-shaped aluminum finned pipe. The numerical analysis is performed in order to determine the heat capacity of the vaporizer per meter of pipe length, so that the vaporizer capacity can be predicted for industrial applications. In order to validate the numerical solution, an experimental setup was constructed. The setup includes a liquid nitrogen tank at a pressure of 9 bar, to which the star-shaped aluminum finned-tube vaporizer is connected. The pressures and temperatures of the LN2 at the inlet and outlet of the vaporizer are measured, and the mass flow rate of the LN2 is also recorded. The numerical solution is compared against these measured data, and the ambient conditions of the experiment are imposed as boundary conditions in the numerical model. Surface tension and contact angle have a significant effect on the boiling of liquid nitrogen, and an average heat transfer coefficient including convective and nucleate boiling components should be obtained for saturated flow boiling of liquid nitrogen in the finned tube. The Fluent CFD module is used for the numerical solution, with the turbulent k-ε model describing the liquid nitrogen flow. The phase change is simulated using the evaporation-condensation approach implemented through user-defined functions (UDF).
The comparison of the numerical and experimental results is presented in this study, and the performance capacity of the star-shaped finned-pipe vaporizer is calculated. Based on this numerical analysis, the performance of the vaporizer per unit length can be predicted for industrial applications, and a suitable vaporizer pipe length can be determined for specific cases.
Keywords: liquid nitrogen, numerical modeling, two-phase flow, cryogenics
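The evaporation-condensation approach referred to above is commonly implemented as the Lee model, in which the interphase mass transfer is driven by the local departure from the saturation temperature. The actual Fluent UDF would be written in C; the sketch below shows the source-term logic in Python with illustrative LN2 properties, and the relaxation coefficient is an assumed tuning value:

```python
def lee_mass_transfer(t_cell, t_sat, alpha_liq, rho_liq, alpha_vap, rho_vap,
                      coeff=0.1):
    """Volumetric mass-transfer source term (kg/m^3/s) in the style of the
    Lee evaporation-condensation model. coeff is a case-dependent relaxation
    coefficient (assumed here); positive return = evaporation (liquid->vapor).
    alpha_*: phase volume fractions, rho_*: phase densities (kg/m^3)."""
    if t_cell > t_sat:  # superheated liquid evaporates
        return coeff * alpha_liq * rho_liq * (t_cell - t_sat) / t_sat
    # subcooled vapor condenses
    return -coeff * alpha_vap * rho_vap * (t_sat - t_cell) / t_sat

# LN2 near 1 atm: Tsat ~ 77.36 K, rho_liq ~ 807 kg/m^3, rho_vap ~ 4.6 kg/m^3
print(round(lee_mass_transfer(80.0, 77.36, 0.5, 807.0, 0.5, 4.6), 3))
```

In the CFD model, this source term feeds the phase continuity equations of each cell, which is what the UDF supplies to the solver.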
Procedia PDF Downloads 119
477 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper places emphasis on the algorithmic implementation of legal constraints and the intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also contains an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors. We investigate how generative design can be utilized to optimize building performance, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. 
Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.
Keywords: generative design, computational design, parametric design, algorithmic modeling
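The generate-filter-score loop such a framework relies on can be sketched in miniature. The zoning limits, plot size, and scoring function below are hypothetical placeholders, not the constraints or objectives of the actual framework.

```python
import random

# Toy sketch of a generative loop: sample candidate building envelopes,
# reject those violating (hypothetical) legal limits, score the remainder.
MAX_HEIGHT = 20.0      # m, assumed zoning height limit
MAX_FAR = 2.5          # assumed floor-area-ratio limit
PLOT_AREA = 600.0      # m^2, assumed plot size

def is_legal(floors, footprint, floor_height=3.0):
    """Check a candidate (floor count, footprint in m^2) against the limits."""
    height = floors * floor_height
    far = floors * footprint / PLOT_AREA
    return height <= MAX_HEIGHT and far <= MAX_FAR and footprint <= PLOT_AREA

def score(floors, footprint):
    """Placeholder objective: favor usable area, penalize footprint (green loss)."""
    return floors * footprint - 0.5 * footprint

random.seed(1)
candidates = [(random.randint(1, 10), random.uniform(100, 500)) for _ in range(200)]
legal = [c for c in candidates if is_legal(*c)]
best = max(legal, key=lambda c: score(*c))
```

A production framework would replace the random sampler with a parametric model and the scalar score with multi-objective aesthetic, economic, and environmental criteria, but the constraint-filtering structure is the same.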
Procedia PDF Downloads 65
476 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico
Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada
Abstract:
In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has a great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are under continuous hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, few landslide inventory maps and landslide susceptibility assessments exist. As a result, no comparative assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created based on 1) the Stability Index MAPping (SINMAP) model using default geotechnical parameters and 2) the same model using geotechnical properties of volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite-slope stability model with a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The validation of the resulting susceptibility maps is performed by comparing them with the inventory map under the LOGISNET system, which provides tools for comparison by using a histogram and a contingency table. 
Results of the experiment establish how the individual models predict landslide locations, together with their advantages and limitations. The results also show that, although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
Keywords: GIS, landslide, modeling, LOGISNET, SINMAP
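The infinite-slope relation at the core of SINMAP can be sketched directly; this follows the usual SINMAP formulation (dimensionless cohesion, relative wetness bounded by 1), but all numerical values below are illustrative assumptions, not parameters from the study.

```python
import math

def wetness(R_over_T, a, theta):
    """Relative wetness w = min(R/T * a / sin(theta), 1).

    R_over_T: steady recharge over soil transmissivity (1/m);
    a: specific catchment area (m); theta: slope angle (rad).
    """
    return min(R_over_T * a / math.sin(theta), 1.0)

def factor_of_safety(theta, C, phi, w, r=0.5):
    """Infinite-slope factor of safety as used by SINMAP.

    C: dimensionless cohesion; phi: friction angle (rad);
    w: relative wetness in [0, 1]; r: water-to-soil density ratio (~0.5).
    """
    return (C + math.cos(theta) * (1.0 - w * r) * math.tan(phi)) / math.sin(theta)

# Illustrative cell: 30-degree slope, modest cohesion, phi = 35 degrees,
# hypothetical R/T and specific catchment area
theta = math.radians(30)
w = wetness(0.0002, 2000.0, theta)
fs = factor_of_safety(theta, 0.25, math.radians(35), w)
```

Swapping the default geotechnical parameters for field-measured ones, as the study does, amounts to changing `C`, `phi`, and the `R/T` calibration in this relation cell by cell.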
Procedia PDF Downloads 313
475 Explosive Clad Metals for Geothermal Energy Recovery
Authors: Heather Mroz
Abstract:
Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, toward higher alloys of stainless steel, nickel-based alloys and titanium. The use of these alloys in solid construction is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladded surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wider array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit through reduced maintenance and lower capital costs compared with solid construction. The focus of this paper will be on the many benefits of the use of explosion clad in process equipment instead of more expensive solid alloy construction. 
The paper will describe the method of clad-plate production by explosion welding, as well as the methods employed to ensure sound bonding of the metals. It will also cover the origins of explosion cladding and recent technological developments: traditionally, explosion-clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion-welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys and titanium. These examples will include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high-temperature, high-pressure and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.
Keywords: clad metal, explosion welding, separator material, well casing material, piping material
Procedia PDF Downloads 154
474 Erosion Influencing Factors Analysis: Case of Isser Watershed (North-West Algeria)
Authors: Chahrazed Salhi, Ayoub Zeroual, Yasmina Hamitouche
Abstract:
Soil water erosion poses a significant threat to the watersheds of Algeria today. The degradation of storage capacity in large dams over the past two decades, primarily due to erosion, necessitates a comprehensive understanding of the factors that contribute to soil erosion. The Isser watershed, located in the northwestern region of Algeria, faces additional challenges such as recurrent droughts and the presence of delicate marl and clay outcrops, which amplify its susceptibility to water erosion. This study aims to employ advanced techniques such as Geographic Information Systems (GIS) and Remote Sensing (RS), in conjunction with the Canonical Correlation Analysis (CCA) method and the Soil and Water Assessment Tool (SWAT) model, to predict specific erosion patterns and analyze the key factors influencing erosion in the Isser basin. To accomplish this, an array of data sources including rainfall, climatic, hydrometric, land use, soil, digital elevation, and satellite data was utilized. The application of the SWAT model to the Isser basin yielded an average annual soil loss of approximately 16 t/ha/yr. Particularly high erosion rates, exceeding 12 t/ha/yr, were observed in the central and southern parts of the basin, encompassing 41% of the total basin area. Through Canonical Correlation Analysis, it was determined that vegetation cover and topography exerted the most substantial influence on erosion. Consequently, the study identified significant and spatially heterogeneous erosion throughout the study area. The impact of land topography on soil loss was found to be directly proportional, while vegetation cover exhibited an inversely proportional relationship. Modeling specific erosion for the Ladrat dam sub-basin estimated a rate of around 39 t/ha/yr, consistent with the recorded capacity loss of 17.80% relative to the bathymetric survey conducted in 2019. 
The findings of this research provide valuable decision-support tools for soil conservation managers, empowering them to make informed decisions regarding soil conservation measures.
Keywords: Isser watershed, RS, CCA, SWAT, vegetation cover, topography
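The proportionality findings (topography direct, vegetation inverse) can be illustrated with a USLE-style factor product; note that SWAT itself uses the MUSLE variant driven by runoff rather than rainfall erosivity, so this is only a didactic sketch, and every factor value below is a hypothetical assumption.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (t/ha/yr) in the USLE factor form A = R*K*LS*C*P.

    R: rainfall erosivity; K: soil erodibility; LS: slope length-steepness;
    C: cover-management; P: support practice.
    """
    return R * K * LS * C * P

# Hypothetical values for a steep, sparsely vegetated marl sub-basin...
A_bare = usle_soil_loss(R=80, K=0.4, LS=3.0, C=0.45, P=1.0)
# ...and the same terrain under dense vegetation cover (lower C factor)
A_vegetated = usle_soil_loss(R=80, K=0.4, LS=3.0, C=0.05, P=1.0)
```

The order-of-magnitude drop from `A_bare` to `A_vegetated` mirrors the inverse vegetation-cover relationship the CCA identified, while raising `LS` raises the loss in direct proportion, mirroring the topography result.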
Procedia PDF Downloads 71
473 Impacts of Present and Future Climate Variability on Forest Ecosystem in Mediterranean Region
Authors: Orkan Ozcan, Nebiye Musaoglu, Murat Turkes
Abstract:
Climate change is widely recognized as one of the most pressing and significant global problems. The concept of ‘climate change vulnerability’ helps us to better comprehend the cause/effect relationships behind climate change and its impact on human societies, socioeconomic sectors, and physiographical and ecological systems. In this study, multifactorial spatial modeling was applied to evaluate the vulnerability of a Mediterranean forest ecosystem to climate change. As a result, the geographical distribution of the final Environmental Vulnerability Areas (EVAs) of the forest ecosystem was derived from the estimated final Environmental Vulnerability Index (EVI) values. This revealed that, at current levels of environmental degradation and under present physical, geographical, policy enforcement and socioeconomic conditions, the area with a ‘very low’ vulnerability degree covered mainly the town, its surrounding settlements and the agricultural lands found mainly over the low and flat travertine plateau and the plains to the east and southeast of the district. The spatial magnitude of the EVAs over the forest ecosystem under the current environmental degradation was also determined. This revealed that the EVAs classed as ‘very low’ account for 21% of the total area of the forest ecosystem, those classed as ‘low’ account for 36%, those classed as ‘medium’ account for 20%, and those classed as ‘high’ account for 24%. Based on regionally averaged future climate assessments and projected future climate indicators, both the study site and the western Mediterranean sub-region of Turkey will probably become associated with a drier, hotter, more continental and more water-deficient climate. This analysis holds true for all future scenarios, with the exception of RCP4.5 for the period from 2015 to 2030. 
However, the present dry-subhumid climate dominating this sub-region and the study area shows a potential for change towards a drier, semiarid climate in the period between 2031 and 2050 according to the RCP8.5 high-emission scenario. All the observed and estimated results and assessments summarized in the study show clearly that the densest forest ecosystem in the southern part of the study site, characterized mainly by Mediterranean coniferous and some mixed forest and maquis vegetation, will very likely be subject to medium and high degrees of vulnerability to future environmental degradation, climate change and variability.
Keywords: forest ecosystem, Mediterranean climate, RCP scenarios, vulnerability analysis
Procedia PDF Downloads 352
472 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector
Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Global warming mitigation is one of the main challenges of this century, with the net balance of greenhouse gas (GHG) emissions required to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions, but also to its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the microwave-heated production of two ceramic pigments at high temperatures (above 1200 °C). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data available in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and continuously adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a verification and validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the reliability of the results. 
The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in specific GHG emissions were obtained compared to the conventional process. Furthermore, numerical simulation appears to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation
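The controller's plunger search can be sketched as a one-dimensional maximization of efficiency over plunger position. The single-peak efficiency curve below is a synthetic stand-in for the field solution (the paper does not state the actual curve or optimum), and the search method shown is one reasonable choice, not necessarily the one implemented in MATLAB.

```python
import math

def efficiency(x):
    """Synthetic single-peak electromagnetic efficiency vs. plunger position (m).

    Peak placed at an assumed 0.137 m; in the real workflow this evaluation
    would be one COMSOL field solution per candidate position.
    """
    return math.exp(-((x - 0.137) / 0.02) ** 2)

def golden_section_max(f, a, b, tol=1e-5):
    """Golden-section search for the maximizer of a unimodal f on [a, b]."""
    g = (math.sqrt(5) - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c          # maximum lies in [a, d]
            c = b - g * (b - a)
        else:
            a, c = c, d          # maximum lies in [c, b]
            d = a + g * (b - a)
    return (a + b) / 2

x_opt = golden_section_max(efficiency, 0.0, 0.3)
```

Golden-section search is attractive here because each function evaluation is an expensive field solve, and the method converges with one new evaluation per iteration without needing derivatives.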
Procedia PDF Downloads 138
471 Reasons to Redesign: Teacher Education for a Brighter Tomorrow
Authors: Deborah L. Smith
Abstract:
To review our program and determine the best redesign options, department members gathered feedback and input through focus groups, analysis of data, and a review of the current research to ensure that the changes proposed were not based solely on the state’s new professional standards. In designing course assignments and assessments, we listened to a variety of constituents, including students, other institutions of higher learning, MDE webinars, host teachers, literacy clinic personnel, and other disciplinary experts. As a result, we are designing a program that is more inclusive of a variety of field experiences for growth. We have determined ways to improve our program by connecting academic disciplinary knowledge, educational psychology, and community building both inside and outside the classroom for professional learning communities. The state’s release of new professional standards led my department members to question what is working and what needs improvement in our program. One aspect of our program that continues to be supported by research and data analysis is the function of supervised field experiences with meaningful feedback. We seek to expand in this area. Other data indicate that we have strengths in modeling a variety of approaches such as cooperative learning, discussions, literacy strategies, and workshops. In the new program, field assignments will be connected to multiple courses, and efforts to scaffold student learning to guide them toward best evidence-based practices will be continuous. Despite running a program that meets multiple sets of standards, there are areas of need that we directly address in our redesign proposal. Technology is ever-changing, so it’s inevitable that improving digital skills is a focus. In addition, scaffolding procedures for English Language Learners (ELL) or other students who struggle is imperative. 
Diversity, equity, and inclusion (DEI) has been an integral part of our curriculum, but the research indicates that more self-reflection and a deeper understanding of culturally relevant practices would help the program improve. Connections with professional learning communities will be expanded, as will leadership components, so that teacher candidates understand their role in changing the face of education. A pilot program will run in academic year 22/23, and additional data will be collected each semester through evaluations and continued program review.
Keywords: DEI, field experiences, program redesign, teacher preparation
Procedia PDF Downloads 169
470 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process
Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel
Abstract:
In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to get its final shape. The DISAMATIC molding process is a way to construct these sand molds for the casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mold. The sand flow is modelled with the Discrete Element Method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the needed DEM parameters: Poisson's ratio, Young's modulus, rolling friction coefficient, sliding friction coefficient and coefficient of restitution (COR). Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient will be found from experiments, the rolling resistance will be investigated by comparing with observations of how the green sand interacts with the chamber wall during experiments, and the DEM simulations will be calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. 
Energy dissipation will be investigated in these simulations for different particle sizes and coefficients of restitution, and scaling laws will be considered to relate the energy dissipation to these parameters. Finally, the found parameter values are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
Keywords: discrete element method, physical properties of materials, calibration, granular flow
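To make concrete how the compression-test values enter the contact model, the normal part of the Hertz-Mindlin law can be sketched from Young's modulus and Poisson's ratio. The material values below are illustrative placeholders (a bulk-calibrated stiffness, typical grain size), not the calibrated green-sand parameters of the study.

```python
import math

def hertz_normal_force(delta, E1, nu1, E2, nu2, R1, R2):
    """Hertzian normal contact force F = (4/3) * E_eff * sqrt(R_eff) * delta^1.5.

    delta: particle overlap (m); E, nu: Young's moduli (Pa) and Poisson ratios
    of the two bodies; R: their radii (m).
    """
    # Effective modulus and radius of the contacting pair
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * delta**1.5

# Illustrative green-sand-like grains: E ~ 50 MPa, nu ~ 0.3,
# radius 0.5 mm, overlap 1 micron -- assumed values, not measurements
F = hertz_normal_force(1e-6, 50e6, 0.3, 50e6, 0.3, 5e-4, 5e-4)
```

The rolling and sliding friction coefficients that the sand-pile comparison calibrates act on top of this elastic normal law, which is why the normal stiffness can be fixed from compression tests first and the friction terms tuned afterwards.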
Procedia PDF Downloads 482
469 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review
Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon
Abstract:
The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. This paper briefly discusses the main problems of lake hydrodynamics and their solutions related to quality assessment and climate effects. An empirical methodology in modeling and mapping was considered for understanding lake hydrodynamics and for visualizing long-term observational daily, monthly, and yearly mean dataset results, using geographical information system (GIS) and COMSOL techniques. Data were obtained for the whole lake and five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce results. Linear regression analyses were developed to build climate scenarios and a linear trend on lake rainfall data over a long period. Potential evapotranspiration rates were described by MODIS and the Thornthwaite method. The rainfall effect on lake water level was modeled by partial differential equations (PDEs), and water quality was characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall varies with monthly and yearly maximum and minimum temperatures; rainfall is high during cool years, while high temperatures are associated with below-average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more correlated with temperature, and cloud cover is more correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall on the lake water surface has affected the lake level. Onshore and offshore nutrient concentrations were compiled from initial literature data. 
The study recommended that further studies fully consider lake bathymetry development with flow analysis and water balance, hydro-meteorological processes, solute transport, wind-driven hydrodynamics, pollution and eutrophication, as these are crucial for lake water quality, climate impact assessment, and water sustainability.
Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration
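The Thornthwaite method mentioned above has a compact closed form; a minimal sketch follows, with the standard heat-index formulation but omitting the day-length correction factor (taken as 1, a simplification) and using roughly equatorial temperatures as an illustrative, not measured, input.

```python
def thornthwaite_pet(monthly_temps_c):
    """Monthly potential evapotranspiration (mm) by the Thornthwaite method.

    monthly_temps_c: 12 mean monthly temperatures (deg C).
    PET_m = 16 * (10 * T_m / I)^a, with heat index I = sum((T_m/5)^1.514)
    over months with T_m > 0. Day-length correction omitted here.
    """
    I = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * max(t, 0.0) / I) ** a for t in monthly_temps_c]

# Illustrative near-equatorial monthly means (deg C), assumed for the LV basin
temps = [23.0, 23.5, 23.5, 23.0, 22.5, 22.0, 21.5, 22.0, 22.5, 23.0, 23.0, 23.0]
pet = thornthwaite_pet(temps)
```

Comparing such temperature-driven estimates against MODIS-derived evapotranspiration, as the review describes, is a common way to cross-check PET over data-sparse basins.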
Procedia PDF Downloads 99
468 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, aerospace, etc., along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. In evolving into the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way through the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. 
For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly; the latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build a data pipeline that digests the data and produces insights. We show the tools we use, including data injection, streaming data processing, machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
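The data-labeling step for a run-to-failure dataset of this kind can be sketched as follows: each unit's last observed cycle is its failure point, the remaining useful life (RUL) of every earlier cycle is the distance to that point, and a binary "fails within w cycles" label supports classification. The alert window of 30 cycles and the toy records are assumptions for illustration.

```python
def label_rul(records, window=30):
    """Label run-to-failure records (unit_id, cycle).

    Returns [(unit, cycle, rul, fails_soon)] where rul = last_cycle - cycle
    and fails_soon = 1 if the unit fails within `window` cycles.
    """
    # A unit's maximum observed cycle is taken as its failure cycle
    last = {}
    for unit, cycle in records:
        last[unit] = max(last.get(unit, 0), cycle)
    return [(u, c, last[u] - c, int(last[u] - c <= window))
            for u, c in records]

# Two toy engines: unit 1 fails at cycle 100, unit 2 at cycle 45
records = [(1, c) for c in range(1, 101)] + [(2, c) for c in range(1, 46)]
labeled = label_rul(records)
```

This also illustrates the inter-dependent-records caveat raised above: cycles of one engine are highly correlated, so train/test splits should hold out whole units, never individual cycles.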
Procedia PDF Downloads 386
467 Analysis and Optimized Design of a Packaged Liquid Chiller
Authors: Saeed Farivar, Mohsen Kahrom
Abstract:
The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are inlet conditions and geometry; model outputs include system performance variables such as power consumption, coefficient of performance (COP) and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve and connection pipes and tubing, through thermo-hydraulic modeling of the heat transfer, fluid flow and thermodynamic processes in each of the mentioned components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. 
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient designs and analyses of compression chillers can take advantage of the presented study and its results.
Keywords: optimization, packaged liquid chiller, performance, simulation
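The COP output of such a cycle model reduces to a ratio of enthalpy differences across the refrigerant state points. A minimal sketch of that calculation follows; the enthalpy values are hypothetical R134a-like numbers for illustration, not states computed by the paper's model.

```python
def chiller_cop(h1, h2, h3):
    """COP of an ideal vapor-compression cycle from refrigerant enthalpies (kJ/kg).

    h1: evaporator exit / compressor inlet
    h2: compressor exit / condenser inlet
    h3: condenser exit; the expansion valve is isenthalpic, so h4 = h3
    """
    q_evap = h1 - h3          # refrigeration effect per kg of refrigerant
    w_comp = h2 - h1          # compressor specific work
    return q_evap / w_comp

# Hypothetical R134a-like state points, for illustration only
cop = chiller_cop(h1=400.0, h2=430.0, h3=250.0)
```

In the full simulation, each enthalpy comes from the component submodels (evaporator, compressor, condenser, expansion valve), so design changes to the heat exchangers propagate into the COP through these state points.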
Procedia PDF Downloads 278
466 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis for these cases is mostly focused on the study of each machine or production step, but it is not common to study the quality that the production process achieves from an aggregated-value viewpoint, which can be used as a quality measurement for determining the impact on the environment. In this paper, the theory of exergetic cost is used for determining the exergy aggregated into three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique, and therefore the use of this theory is proposed for discontinuous flows, starting from single models of workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models are a representation of exergy flows between components in the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the first describes the operation of the components of each machine and where the exergy losses are, while the second estimates the exergy-aggregated value for the finished product and the wasted feedstock. 
Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required for operation of the manufacturing process, which amounts to approximately 2 MJ. These losses are caused mainly by technical limitations of the machines, oversizing of the metal feedstock (which demands more mechanical transformation work), and low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From this information, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
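The aggregated-value accounting described above can be sketched as simple exergy bookkeeping. In the minimal sketch below, only the ~2 MJ total and the roughly one-thirtieth effective fraction follow the abstract; the per-workstation split and efficiencies are hypothetical illustrations, not the authors' measured values.

```python
# Hedged sketch of exergy-aggregated-value bookkeeping (not the authors' model).
# The ~2 MJ total and the ~1/30 effective fraction follow the abstract; the
# per-workstation inputs and efficiencies below are purely hypothetical.

total_exergy_J = 2.0e6  # ~2 MJ demanded by the whole manufacturing process

# Hypothetical exergy demand and exergy efficiency per workstation type:
# mechanical workstations fall in the reported 10-60% band, thermal ones <5%.
workstations = {
    "lathe":             {"input_J": 2.0e5, "efficiency": 0.15},
    "punching_press":    {"input_J": 1.0e5, "efficiency": 0.20},
    "treatment_tank":    {"input_J": 1.0e6, "efficiency": 0.01},
    "vulcanizing_press": {"input_J": 7.0e5, "efficiency": 0.01},
}

# Aggregated value = exergy effectively transferred to the product;
# the remainder is lost to equipment limitations and irreversibilities.
aggregated_J = sum(w["input_J"] * w["efficiency"] for w in workstations.values())
losses_J = sum(w["input_J"] * (1 - w["efficiency"]) for w in workstations.values())

print(f"aggregated exergy value: {aggregated_J / 1e3:.0f} kJ")
print(f"exergy losses:           {losses_J / 1e3:.0f} kJ")
print(f"effective fraction:      1/{total_exergy_J / aggregated_J:.0f} of total")
```

With these made-up figures the effective fraction comes out near the one-thirtieth ratio the abstract reports, which is the sanity check such bookkeeping is meant to provide.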
Procedia PDF Downloads 170
465 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would make it possible to lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew-point temperature. A radiant cooling system combined with a dehumidification system would make it possible to remove humidity from the space, thereby lowering the dew-point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims at providing an estimate of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort requirements and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance, and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimate, this room is equipped with an ideal heating, cooling, humidification, and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat-recovery heat exchanger and another heat exchanger connected to a heat sink. Main results show that the system should be designed to meet a cooling power of 42 W·m⁻² and a desiccant rate of 45 g of water per hour. 
Subsequently, a parametric study of comfort and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions. It enables an estimate of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing
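The condensation-risk criterion driving this design (panel surface at or below the dew point) can be checked by hand with the standard Magnus-Tetens dew-point approximation. The sketch below is not part of the TRNSYS model described in the abstract; the 18 °C panel temperature is an assumed value for illustration, while the 25 °C / 40-60% RH comfort band comes from the abstract.

```python
# Hedged sketch of the condensation-risk check motivating the combined
# dehumidifier/cooling panel. Uses the Magnus-Tetens dew-point approximation;
# the 18 degC panel surface temperature is a hypothetical assumption.
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Dew-point temperature via the Magnus-Tetens approximation."""
    a, b = 17.27, 237.7  # Magnus coefficients for water over roughly 0-60 degC
    gamma = a * temp_c / (b + temp_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

def condensation_risk(panel_temp_c, room_temp_c, rh_percent, margin_c=1.0):
    """True if the panel surface sits within margin_c of the dew point."""
    return panel_temp_c <= dew_point_c(room_temp_c, rh_percent) + margin_c

# Comfort band from the abstract: room at 21-25 degC, 40-60% RH.
for rh in (40, 60):
    td = dew_point_c(25.0, rh)
    print(f"RH {rh}%: dew point {td:.1f} degC, "
          f"risk at 18 degC panel: {condensation_risk(18.0, 25.0, rh)}")
```

At 25 °C and 60% RH the dew point is close to 17 °C, which shows why a chilled panel only a degree or two cooler needs active dehumidification near its surface.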
Procedia PDF Downloads 175
464 Instant Data-Driven Robotics Fabrication of Light-Transmitting Ceramics: A Responsive Computational Modeling Workflow
Authors: Shunyi Yang, Jingjing Yan, Siyu Dong, Xiangguo Cui
Abstract:
Current architectural façade design practices incorporate various daylighting and solar radiation analysis methods. These emphasize the impact of geometry on façade design. There is scope to extend this knowledge into methods that address material translucency, porosity, and form. Such approaches can also achieve these conditions through adaptive robotic manufacturing that exploits material dynamics within the design and alleviates fabrication waste from molds, ultimately accelerating the autonomous manufacturing system. Beyond analyzing environmental solar radiation in building façade design, there is also an open research area concerning how lighting effects can be precisely controlled through instant, real-time data-driven robot control and the manipulation of material properties. Ceramics offer a wide range of transmittance and deformation potentials for robotic control, given research into their material properties. This paper presents a semi-autonomous system that engages real-time data-driven robotic control, hardware kit design, environmental building studies, human interaction, and exploratory research and experiments. Our objectives are to investigate the relationship between the physio-material properties of different clay bodies or ceramics and their transmittance; to explore a feedback system using instant lighting data in robotic fabrication to achieve precise lighting effects; and to design suitable end effectors and robot behaviors for different stages of deformation. We experiment with architectural clay, as a façade material that is potentially translucent at a certain stage and can respond to light. Studying the relationship between form, material properties, and porosity can help create different interior and exterior light effects and provide façade solutions for specific architectural functions. 
The key idea is to maximize the utilization of in-progress robotic fabrication and ceramic materiality to create a highly integrated autonomous system for lighting façade design and manufacture.
Keywords: light transmittance, data-driven fabrication, computational design, computer vision, gamification for manufacturing
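The instant-lighting feedback objective can be illustrated with a minimal closed-loop sketch: measure transmitted light, compare to a target, and nudge a fabrication parameter. Everything here is a hypothetical stand-in (the sensor model, the deformation-depth parameter, the gain), not the authors' hardware or control code; it only shows the shape of such a loop.

```python
# Hedged sketch of the instant-lighting feedback idea (not the authors' system):
# a proportional loop that adjusts a hypothetical robot deformation parameter
# until the measured light transmittance reaches a target value.

def run_feedback(read_transmittance, target, depth=0.0, gain=5.0,
                 steps=50, tol=0.01):
    """Iteratively adjust deformation depth until transmittance is within tol."""
    for _ in range(steps):
        error = target - read_transmittance(depth)
        if abs(error) < tol:
            break
        depth += gain * error  # proportional update; gain is a tuning assumption
    return depth

# Stand-in sensor model: deeper deformation thins the clay and transmits more.
def fake_sensor(depth_mm):
    return min(1.0, max(0.0, 0.05 + 0.08 * depth_mm))

final_depth = run_feedback(fake_sensor, target=0.45)
print(f"converged deformation depth: {final_depth:.2f} mm")
```

In a real workflow the stand-in sensor would be replaced by a camera or photometer reading taken during fabrication, closing the loop between material behavior and robot motion.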
Procedia PDF Downloads 123
463 Algae Biofertilizers Promote Sustainable Food Production and Nutrient Efficiency: An Integrated Empirical-Modeling Study
Authors: Zeenat Rupawalla, Nicole Robinson, Susanne Schmidt, Sijie Li, Selina Carruthers, Elodie Buisset, John Roles, Ben Hankamer, Juliane Wolf
Abstract:
Agriculture has radically changed the global biogeochemical cycle of nitrogen (N). Fossil fuel-enabled synthetic N-fertiliser is a foundation of modern agriculture, but crops take up only about half of what is applied to soil. To address N-pollution from cropping and the large carbon and energy footprint of N-fertiliser synthesis, new technologies delivering enhanced energy efficiency, decarbonisation, and a circular nutrient economy are needed. We characterised algae fertiliser (AF) as an alternative to synthetic N-fertiliser (SF) using empirical and modelling approaches. We cultivated microalgae in nutrient solution and modelled up-scaled production in nutrient-rich wastewater. Over four weeks, AF released 63.5% of its N as ammonium and nitrate, and 25% of its phosphorus (P) as phosphate, to the growth substrate, while SF released 100% of its N and 20% of its P. To maximise crop N-use and minimise N-leaching, we explored AF and SF dose-response curves with spinach under glasshouse conditions. AF-grown spinach produced 36% less biomass than SF-grown plants due to AF's slower and linear N-release, while SF resulted in five times higher N-leaching loss than AF. Optimised blends of AF and SF boosted crop yield and minimised N-loss due to greater synchrony of N-release and crop uptake. Additional benefits of AF included greener leaves, lower leaf nitrate concentration, and higher microbial diversity and water-holding capacity in the growth substrate. Life-cycle analysis showed that replacing the most effective SF dosage with AF lowered the carbon footprint of fertiliser production from 2.02 g CO₂ (C-producing) to -4.62 g CO₂ (C-sequestering), with a further 12% reduction when AF is produced on wastewater. Embodied energy was lowest for AF-SF blends and could be reduced by 32% when cultivating algae on wastewater. 
We conclude that (i) microalgae offer a sustainable alternative to synthetic N-fertiliser in spinach production and potentially other crop systems, and (ii) microalgae biofertilisers support the circular nutrient economy and several sustainable development goals.
Keywords: bioeconomy, decarbonisation, energy footprint, microalgae
Procedia PDF Downloads 137
462 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimizing enzymes and exploiting their full potential under circumstances of insufficient structure-function knowledge. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, current screening methods have limited capabilities. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another method combines a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turnaround time of 30 minutes per HPLC sample limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations. These characteristic vibrational signals arise from the different vibrational modes of chemical bonds. 
With broadband CARS, the chemicals in a sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify signal levels by several orders of magnitude over spontaneous Raman systems and therefore has the potential to evaluate a chemical's concentration rapidly. As a screening demonstration with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The NADH signal at 1660 cm⁻¹, which originates from the nicotinamide moiety, was used to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing NADH signal intensity. This CARS measurement result was consistent with that of a conventional method, UV-Vis spectroscopy. CARS is expected to find application in high-throughput enzyme screening and to enable more reliable enzyme improvement within a reasonable time.
Keywords: coherent anti-Stokes Raman spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
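Converting a label-free signal into a concentration, as done here for NADH, typically rests on a linear calibration against known standards. The sketch below illustrates that step with made-up calibration points; only the 0.33 s CARS evaluation time and the 30 min HPLC turnaround come from the abstract, and the fitting procedure is a generic least-squares line, not the authors' pipeline.

```python
# Hedged sketch: linear calibration of a label-free signal against hypothetical
# NADH standards, plus the throughput arithmetic implied by the abstract.
# Calibration points are made up for illustration only.

# Hypothetical standards: NADH concentration (mM) vs. measured signal (a.u.)
conc_mM = [0.0, 5.0, 10.0, 20.0]
signal  = [0.02, 0.51, 1.03, 1.99]

# Ordinary least-squares line fit: signal = slope * conc + intercept
n = len(conc_mM)
mx = sum(conc_mM) / n
my = sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc_mM, signal))
         / sum((x - mx) ** 2 for x in conc_mM))
intercept = my - slope * mx

def signal_to_conc(s):
    """Invert the calibration line to estimate concentration from a signal."""
    return (s - intercept) / slope

# Throughput comparison: 30 min per HPLC sample vs. 0.33 s per CARS evaluation
speedup = (30 * 60) / 0.33
print(f"estimated concentration for signal 1.5: {signal_to_conc(1.5):.1f} mM")
print(f"CARS vs. HPLC throughput: ~{speedup:.0f}x")
```

The throughput arithmetic is the headline: at 0.33 s per evaluation, CARS screens on the order of five thousand times faster than a 30-minute HPLC turnaround, which is what makes library-scale directed evolution feasible.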
Procedia PDF Downloads 141
461 The Numerical Model of the Onset of Acoustic Oscillation in Pulse Tube Engine
Authors: Alexander I. Dovgyallo, Evgeniy A. Zinoviev, Svetlana O. Nekrasova
Abstract:
Most works on pulse tube converters describe the workflow using mathematical models of stationary modes. However, the unsteady behavior of thermoacoustic systems during start-up, stopping, and acoustic load changes is of particular interest. The aim of the present study was to develop a mathematical model of the thermal excitation of acoustic oscillations in a pulse tube engine (PTE), realized as a small-scale pulse tube engine operating on atmospheric air. Unlike some previous works, this standing-wave configuration is a fully closed system. The improvements over previous mathematical models are the following: the model allows specifying any value of regenerator porosity, takes into account the piston weight and the friction in the cylinder-piston unit, and determines the operating frequency. The numerical method is based on relations between the pressure and volume velocity variables at the ends of each PTE element, recorded through the appropriate transfer matrix. The solution demonstrates that the PTE operating frequency is a complex value that depends on the piston mass and the dynamic friction due to its movement in the cylinder. On the basis of the determined frequency, the equations for thermoacoustically induced heat transport and acoustic power generation were solved for a channel with a temperature gradient at its ends. The numerical simulation results reveal the features of the oscillation initialization process and show that the generated acoustic power exceeds the steady-mode power by a factor of 3-4. This does not, however, permit continuous utilization of the excess, since it exists only in a transient mode lasting 30-40 s. The experiments were carried out on a small-scale PTE. 
The results show that the acoustic power is in the range of 0.7-1.05 W for the frequency range f = 13-18 Hz and pressure amplitudes of 11-12 kPa. These experimental data correlate satisfactorily with the numerical modeling results. The mathematical model can be applied straightforwardly to thermoacoustic devices with variable thermal-reservoir temperatures and variable transduction loads, which are expected to occur in practical implementations of portable thermoacoustic engines.
Keywords: nonlinear processes, pulse tube engine, thermal excitation, standing wave
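The transfer-matrix formulation the abstract relies on (relating pressure and volume velocity at the two ends of each element and chaining elements by matrix multiplication) can be sketched with the textbook lossless-duct matrix. The segment dimensions and gas properties below are illustrative assumptions, not the authors' PTE geometry, and a real model would add regenerator losses and temperature gradients.

```python
# Hedged sketch of the transfer-matrix method the abstract describes: each
# acoustic element maps (pressure, volume velocity) at one end to the other
# via a 2x2 matrix, and elements chain by matrix multiplication. Lossless
# plane-wave duct matrices only; dimensions are hypothetical.
import cmath
import math

def duct_matrix(length_m, area_m2, freq_hz, rho=1.2, c=343.0):
    """Transfer matrix of a lossless duct segment (plane-wave approximation)."""
    k = 2 * math.pi * freq_hz / c      # acoustic wavenumber
    zc = rho * c / area_m2             # characteristic acoustic impedance
    return [[cmath.cos(k * length_m), 1j * zc * cmath.sin(k * length_m)],
            [1j * cmath.sin(k * length_m) / zc, cmath.cos(k * length_m)]]

def matmul2(a, b):
    """Multiply two 2x2 complex matrices (chains two elements in series)."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

# Chain two hypothetical segments (e.g., a tube and a narrower section),
# evaluated inside the 13-18 Hz range reported in the abstract.
f = 15.0
total = matmul2(duct_matrix(0.5, 1e-3, f), duct_matrix(0.2, 5e-4, f))

# Given the end-2 state (p2, U2), recover the end-1 state (p1, U1).
p2, u2 = 1000.0 + 0j, 0.0 + 0j   # rigid termination: zero volume velocity
p1 = total[0][0] * p2 + total[0][1] * u2
u1 = total[1][0] * p2 + total[1][1] * u2
print(f"|p1| = {abs(p1):.1f} Pa, |U1| = {abs(u1):.2e} m^3/s")
```

A useful invariant for checking such chains is that the determinant of a lossless, reciprocal transfer matrix is exactly 1, which is preserved under multiplication.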
Procedia PDF Downloads 376
460 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease
Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette
Abstract:
Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens use metabolites for infection and progression. In humans, metabolic change is a hallmark of cancer development, for example. Quantified metabolomics data, combined with other omics or clinical data and analyzed using various unsupervised and supervised methods, can lead to better diagnosis and prognosis. It can also provide information about resistance and contribute knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used for the investigation of disease development in wheat and human cells, will be presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation, and low cost. This presentation will describe a new method for metabolite quantification from NMR data that combines the alignment of standard spectra to sample spectra with a subsequent multivariate linear regression that optimizes the fit of the assigned metabolites' spectra to the samples' spectra. Several different alignment methods were tested, and the multivariate linear regression results were compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we will present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, and biomarker selection providing novel markers. 
These analysis methods have been utilized for the investigation of Fusarium head blight resistance in wheat cultivars as well as the analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore, and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating the resistance level of wheat subtypes. The quantification data are compared to results obtained using other published methods. Fusarium-infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of carbonic anhydrase inhibitors on the observed metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the results are annotated in the context of biochemical pathways.
Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment
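The regression step at the heart of the quantification method (after alignment, express the sample spectrum as a linear combination of assigned reference spectra and solve for the coefficients) can be sketched with synthetic data. The Gaussian "spectra" below are stand-ins, not real NMR data, and the two-component normal-equations solve is a minimal illustration of the multivariate linear regression idea, not the authors' implementation.

```python
# Hedged sketch of spectral-library least-squares quantification: fit a sample
# spectrum as a weighted sum of reference spectra. Gaussian peaks stand in for
# real (already aligned) NMR spectra; weights play the role of concentrations.
import math

def gaussian_peak(center, width, n=200):
    """Synthetic single-peak 'spectrum' on n points."""
    return [math.exp(-((i - center) / width) ** 2) for i in range(n)]

# Two reference spectra and a mixture built from known weights
ref_a = gaussian_peak(60, 8)
ref_b = gaussian_peak(140, 8)
true_ca, true_cb = 2.5, 0.8
sample = [true_ca * a + true_cb * b for a, b in zip(ref_a, ref_b)]

# Normal equations for the two-component least-squares fit:
# minimize || sample - ca*ref_a - cb*ref_b ||^2
saa = sum(a * a for a in ref_a)
sbb = sum(b * b for b in ref_b)
sab = sum(a * b for a, b in zip(ref_a, ref_b))
say = sum(a * y for a, y in zip(ref_a, sample))
sby = sum(b * y for b, y in zip(ref_b, sample))
det = saa * sbb - sab * sab
ca = (say * sbb - sby * sab) / det
cb = (sby * saa - say * sab) / det

print(f"fitted concentrations: {ca:.3f}, {cb:.3f}")
```

With noise-free synthetic data the fit recovers the true weights exactly; with real spectra, overlap between metabolite signatures and baseline effects are what make the prior alignment step matter.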
Procedia PDF Downloads 338
459 Engineering Analysis for Fire Safety Using Computational Fluid Dynamics (CFD)
Authors: Munirajulu M, Srikanth Modem
Abstract:
A large cricket stadium with the capacity to accommodate several thousand spectators has a seating arena consisting of a two-tier arrangement, with an upper and a lower bowl and an intermediate concourse podium level for pedestrian movement to access the bowls. The uniqueness of the stadium is that spectators have an unobstructed view of the field of play from all around the podium. The upper and lower bowls are connected by stairs. The stair landing is a precast slab supported by cantilevered steel beams, which are fixed to the precast columns supporting the stadium structure. The stair slabs are precast concrete supported on the landing slab and the cantilevered steel beams. In the event of a fire at podium level between two staircases, the fire resistance of the steel beams is critical to life safety. If a steel beam loses its strength due to a lack of fire resistance, it will no longer adequately support the stair slabs and may create a hazard when evacuating occupants from the upper bowl to the lower bowl. In this study, to ascertain the fire rating and life safety, a performance-based design using CFD analysis is used to evaluate the steel beams' fire resistance. A fire size of 3.5 MW (convective heat output) with a wind speed of 2.57 m/s is considered for the fire and smoke simulation. The CFD results show that the smoke temperature near and around the staircase does not exceed 150 °C for the fire duration considered. The surface temperature of the cantilevered steel beams is found to be less than or equal to 150 °C. Since this temperature is much less than the critical failure temperature of steel (520 °C), it is concluded that the design of the structural steel supports on the staircase is adequate and does not need additional fire protection such as a fire-resistant coating. The CFD analysis provided an engineering basis for the performance-based design of the steel structural elements and an opportunity to optimize fire protection requirements. 
Thus, performance-based design using CFD modeling and simulation of fire and smoke is an innovative way to evaluate fire-rating requirements, ascertain life safety, and optimize the design with regard to fire protection of structural steel elements.
Keywords: fire resistance, life safety, performance-based design, CFD analysis
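A quick hand-check often run alongside such CFD studies is Alpert's ceiling-jet correlation for peak gas temperature under a ceiling. In the sketch below, the 3.5 MW fire size and the 520 °C steel failure criterion come from the abstract, while the ceiling height and radial distances are assumed values; Alpert's correlation is a screening estimate, not a substitute for the CFD model described.

```python
# Hedged sketch of a hand-check alongside the CFD: Alpert's ceiling-jet
# correlation for peak gas temperature. The 3.5 MW fire and 520 degC steel
# criterion follow the abstract; height and radius are assumptions.

def alpert_ceiling_jet_c(q_kw, height_m, radius_m, t_amb_c=20.0):
    """Peak ceiling-jet gas temperature (degC) per Alpert's correlation."""
    if radius_m / height_m <= 0.18:           # directly above the fire plume
        dt = 16.9 * q_kw ** (2.0 / 3.0) / height_m ** (5.0 / 3.0)
    else:                                      # radially outward along ceiling
        dt = 5.38 * (q_kw / radius_m) ** (2.0 / 3.0) / height_m
    return t_amb_c + dt

Q_KW = 3500.0           # 3.5 MW fire from the abstract
H_M = 10.0              # assumed height of the steel above the fire
STEEL_CRITICAL_C = 520.0

t_above = alpert_ceiling_jet_c(Q_KW, H_M, radius_m=0.5)
t_offset = alpert_ceiling_jet_c(Q_KW, H_M, radius_m=5.0)
print(f"gas temperature above fire: {t_above:.0f} degC")
print(f"gas temperature 5 m away:   {t_offset:.0f} degC")
print(f"below steel failure limit:  {max(t_above, t_offset) < STEEL_CRITICAL_C}")
```

For these assumed dimensions the correlation lands in the same regime as the reported CFD result (gas temperatures around 100-150 °C, well below 520 °C), which is exactly the kind of order-of-magnitude cross-check a performance-based design review looks for.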
Procedia PDF Downloads 192