Search results for: iterative computation

114 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

We present herewith a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, for accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. Both calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors rather than, as in Monte Carlo simulation, by the number of obligors. The limitation of this method lies in the "curse of dimensionality" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method span a wide range: from credit derivatives pricing to economic capital calculation for the banking book, default risk charge and incremental risk charge computation for the trading book, and even risk types other than credit risk.
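
As an illustration of the core idea, the COS expansion of Fang and Oosterlee recovers a distribution function from its characteristic function in one semi-analytic step. The following is a minimal Python sketch of that generic recovery, not the paper's adjusted credit-portfolio version; the truncation range, term count, and the normal test case are illustrative assumptions.

```python
import numpy as np

def cos_cdf(phi, x, a, b, N=256):
    """Approximate F(x) = P(X <= x) from a characteristic function phi
    using the COS method (Fang & Oosterlee) on a truncation range [a, b]."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Cosine-series coefficients of the density on [a, b]
    A = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
    A[0] *= 0.5                       # first term of the series is halved
    # Term-by-term analytic integration of the cosine series from a to x
    F = A[0] * (x - a)                # the k = 0 term integrates to (x - a)
    F += np.sum(A[1:] * np.sin(u[1:] * (x - a)) / u[1:])
    return F

# Illustration with a standard normal, phi(u) = exp(-u^2 / 2):
phi = lambda u: np.exp(-0.5 * u ** 2)
print(cos_cdf(phi, 0.0, -10.0, 10.0))   # ~0.5
print(cos_cdf(phi, 1.96, -10.0, 10.0))  # ~0.975
```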

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 134
113 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective

Authors: Noury Bakrim

Abstract:

Formalization is indeed foundational to Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computational proposal of Universal Grammar rules per cycle, or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of the minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion, relying on combinatorial mapping of rule hierarchies onto the external Entax of the Event Sequence. Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie, with edges from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps); then, under disjunction edges H, for the combinatorial map P belonging to Hie exponentiation by intersection, there are pullbacks and projections equal to the restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax) shifting from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).

Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence

Procedia PDF Downloads 101
112 Exploration into Bio-Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.
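
The synaptic plasticity such a system would mimic can be illustrated with a pair-based spike-timing-dependent plasticity (STDP) rule. The sketch below is a generic textbook formulation, not the project's spintronic implementation; all parameter values are illustrative assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                     # pre before post -> strengthen synapse
        w += a_plus * np.exp(-dt / tau)
    else:                          # post before pre -> weaken synapse
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))

w = 0.5
print(stdp_update(w, t_pre=10.0, t_post=15.0))  # potentiation
print(stdp_update(w, t_pre=15.0, t_post=10.0))  # depression
```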

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 5
111 Additive Manufacturing – Application to Next Generation Structured Packing (SpiroPak)

Authors: Biao Sun, Tejas Bhatelia, Vishnu Pareek, Ranjeet Utikar, Moses Tadé

Abstract:

Additive manufacturing (AM), commonly known as 3D printing, together with continuing advances in parallel processing and computational modeling, has created a paradigm shift in the design and operation of chemical processing plants, especially LNG plants. With rising energy demands, environmental pressures, and economic challenges, there is a continuing industrial need for disruptive technologies such as AM, which possess capabilities that can drastically reduce the cost of manufacturing and operating chemical processing plants in the future. However, the continuing challenge for 3D printing is its limited adaptability in re-designing process plant equipment, coupled with the non-existence of theory or models that could assist in selecting the optimal candidates out of the countless potential fabrications possible using AM. One of the most common packings used in the LNG process is structured packing in the packed column, which is a unit operation in the process. In this work, we present an example of an optimal strategy for the application of AM to this important unit operation. Packed columns use a packing material through which the gas phase passes and comes into contact with the liquid phase flowing over the packing, performing the mass transfer necessary to enrich the products. Structured packing consists of stacks of corrugated sheets, typically inclined between 40° and 70° from the horizontal plane. Computational Fluid Dynamics (CFD) was used to test and model various geometries and to study the governing hydrodynamic characteristics. The results demonstrate that the costly iterative experimental process can be minimized. Furthermore, they also improve the understanding of the fundamental physics of the system at the multiscale level. SpiroPak, patented by Curtin University, represents an innovative structured packing solution currently at a technology readiness level (TRL) of 5-6. This packing exhibits remarkable characteristics, offering a substantial increase in surface area while significantly enhancing hydrodynamic and mass transfer performance. Recent studies have revealed that SpiroPak can reduce pressure drop by 50-70% compared to commonly used commercial packings and can achieve 20-50% greater mass transfer efficiency (particularly in CO2 absorption applications). The implementation of SpiroPak has the potential to reduce the overall size of columns and decrease power consumption, resulting in savings in both capital expenditure (CAPEX) and operational expenditure (OPEX) when retrofitting existing systems or incorporating it into new processes. Furthermore, pilot- to large-scale tests are currently underway to further advance and refine this technology.
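
For a sense of the hydrodynamic quantity the CFD study resolves, a first-pass pressure-gradient estimate for a packed column can be sketched with the Ergun equation. Note this correlation was derived for randomly packed beds, so here it serves only as an illustrative baseline against which a structured packing's reported 50-70% reduction can be pictured; all numerical values below are assumptions, not data from the paper.

```python
def ergun_dp_per_length(u, eps, d_p, rho=1.2, mu=1.8e-5):
    """Ergun equation: pressure gradient (Pa/m) through a packed bed.
    u: superficial gas velocity (m/s), eps: void fraction,
    d_p: equivalent packing diameter (m); rho, mu for air at ~20 C."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1.0 - eps) * rho * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

base = ergun_dp_per_length(u=1.5, eps=0.75, d_p=0.02)
print(f"baseline packed bed: {base:.1f} Pa/m")
# Picture a packing that cuts pressure drop by 60% (mid-range of 50-70%):
print(f"improved packing:    {0.4 * base:.1f} Pa/m")
```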

Keywords: additive manufacturing (AM), 3D printing, computational fluid dynamics (CFD), structured packing (SpiroPak)

Procedia PDF Downloads 38
110 A Lightweight Blockchain: Enhancing Internet of Things Driven Smart Buildings Scalability and Access Control Using Intelligent Directed Acyclic Graph Architecture and Smart Contracts

Authors: Syed Irfan Raza Naqvi, Zheng Jiangbin, Ahmad Moshin, Pervez Akhter

Abstract:

Currently, IoT systems depend on a centralized client-server architecture that causes various scalability and privacy vulnerabilities. Distributed ledger technology (DLT) introduces a set of opportunities for the IoT, which leads to practical ideas for existing components at all levels of existing architectures. Blockchain Technology (BCT), as instantiated in Bitcoin (BTC) and Ethereum, appears to be one approach to solving several IoT problems and offers multiple possibilities. However, IoT devices are resource-constrained, with insufficient capacity and excessive computational overhead for processing blockchain consensus mechanisms; the existing challenges of traditional BCT for the IoT are poor scalability, low energy efficiency, and transaction fees. IOTA is a distributed ledger based on a Directed Acyclic Graph (DAG) that ensures M2M micro-transactions are free of charge. IOTA has the potential to address existing IoT-related difficulties such as infrastructure scalability, privacy, and access control mechanisms. We propose an architecture, SLDBI: A Scalable, Lightweight DAG-based Blockchain Design for Intelligent IoT Systems, which adapts the DAG-based Tangle and implements a lightweight message data model to address the IoT limitations. It enables the smooth integration of new IoT devices into a variety of apps. SLDBI enables comprehensive access control, energy efficiency, and scalability in IoT ecosystems by utilizing the Masked Authenticated Messaging (MAM) protocol and the IOTA Smart Contract Protocol (ISCP). Furthermore, we suggest carrying out the proof-of-work (PoW) computation on the full node in an energy-efficient way. Experiments have been carried out to show the capability of the Tangle to achieve better scalability while maintaining energy efficiency. The findings show user access control management at granular levels and ensure scaling up to massive networks with thousands of IoT nodes, such as Smart Connected Buildings (SCBs).
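
The proof-of-work that the design delegates to the full node is, at its core, a nonce search against a difficulty target. Below is a minimal generic sketch of that idea in Python; IOTA's actual PoW uses a different (trinary, Curl-based) scheme, so the SHA-256 variant and the difficulty value here are illustrative assumptions.

```python
import hashlib

def proof_of_work(message: bytes, difficulty_bits: int = 16):
    """Find a nonce such that SHA-256(message || nonce) has at least
    `difficulty_bits` leading zero bits. Generic sketch only."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1   # offloading this loop is what relieves the IoT device

nonce, digest = proof_of_work(b"M2M micro-transaction payload")
print(nonce, digest[:16])
```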

Keywords: blockchain, IoT, directed acyclic graph, scalability, access control, architecture, smart contract, smart connected buildings

Procedia PDF Downloads 91
109 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice

Authors: Joy Chadwick

Abstract:

This Scholarship of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates involved in an inclusive education methods course within a four-year direct-entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the realities of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual semi-structured interviews with teacher candidates. The journal entries were completed at the start and the end of the semester, with the intent of having teacher candidates reflect on their beliefs about what it means to be an effective inclusive educator and how the course and practicum experiences impacted their understanding of and approaches to teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social-emotional learning (SEL), and self-regulation supported teacher candidates' abilities to meet the needs of their students more effectively. Course content that focused on specific exceptionalities also supported teacher candidates in being proactive rather than reactive when responding to student learning challenges. Teacher candidates also articulated the importance of reframing their perspective on students in challenging moments, and that seeing the individual worth of each child was integral to their approach to teaching. A persistent question for teacher educators is what pedagogical knowledge and understanding are most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing the personal attributes and mindsets of teacher candidates regarding children, as well as instructional frameworks, when designing coursework. Further, the alignment of an inclusive education course with a teaching practicum allows for an iterative approach to learning. The practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. The research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.

Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education

Procedia PDF Downloads 46
108 Introduction to Two Artificial Boundary Conditions for Transient Seepage Problems and Their Application in Geotechnical Engineering

Authors: Shuang Luo, Er-Xiang Song

Abstract:

Many problems in geotechnical engineering, such as foundation deformation, groundwater seepage, seismic wave propagation, and geothermal transfer, may involve analysis of ground that can be seen as extending to infinity. To that end, consideration has to be given to how to deal with the unbounded domain when analyzing it with numerical methods, such as the finite element method (FEM), the finite difference method (FDM), or the finite volume method (FVM). A simple artificial boundary approach, derived from the analytical solutions for transient radial seepage problems, is introduced. It should be noted, however, that the analytical solutions used to derive the artificial boundary are particular solutions under certain boundary conditions, such as a constant hydraulic head at the origin or a constant pumping rate of the well. For unbounded domains with unsteady boundary conditions, a more sophisticated artificial boundary approach to deal with the infinity of the domain is presented. By applying Laplace transforms and introducing some specially defined auxiliary variables, the global artificial boundary conditions (ABCs) are simplified to local ones, so that the computational efficiency is enhanced significantly. The two local ABCs introduced are implemented in a finite element computer program so that various seepage problems can be calculated. The two approaches are first verified by the computation of a one-dimensional radial flow problem and then tentatively applied to more general two-dimensional cylindrical problems and plane problems. Numerical calculations show that the local ABCs not only give good results for one-dimensional axisymmetric transient flow but are also applicable to more general problems, such as axisymmetric two-dimensional cylindrical problems and even more general planar two-dimensional flow problems for well doublets and well groups. An important advantage of the latter local boundary is its applicability to seepage under rapidly changing unsteady boundary conditions; even then, the computational results on the truncated boundary are usually quite satisfactory. In this respect, it is superior to the former local boundary. Simulation of relatively long operational times demonstrates, to a certain extent, the numerical stability of the local boundary. The solutions of the two local ABCs are compared with each other and with those obtained using a large element mesh, which confirms their satisfactory performance and clear superiority over the large-mesh model.
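
The transient radial seepage solutions from which the first artificial boundary is derived reduce, for a constant pumping rate in an infinite confined aquifer, to the classical Theis well function. A minimal Python sketch of that baseline solution follows; it is offered only to illustrate the analytical ingredient, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1 = Theis well function W(u)

def theis_drawdown(r, t, Q, T, S):
    """Drawdown s(r, t) for constant-rate pumping from a well in an
    infinite confined aquifer: s = Q / (4 pi T) * W(u), u = r^2 S / (4 T t)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative values: Q (m^3/s), T transmissivity (m^2/s), S storativity (-)
print(theis_drawdown(r=50.0, t=3600.0, Q=0.01, T=1e-3, S=1e-4), "m")
```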

Keywords: transient seepage, unbounded domain, artificial boundary condition, numerical simulation

Procedia PDF Downloads 278
107 Molecular Dynamics Simulation of Irradiation-Induced Damage Cascades in Graphite

Authors: Rong Li, Brian D. Wirth, Bing Liu

Abstract:

Graphite is the matrix and structural material in high-temperature gas-cooled reactors and exhibits a pronounced irradiation response. It is therefore of significant importance to analyze defect production and evaluate the behavior of graphite under irradiation. A vast experimental literature exists on the dimensional change, mechanical properties, and thermal behavior of graphite; however, few studies have addressed the atomistic perspective, and remarkably few molecular dynamics simulations have been performed to study the irradiation response of graphite. In this paper, irradiation-induced damage cascades in graphite were investigated with molecular dynamics simulation. Statistical results for the graphite defects were obtained by sampling a wide energy range (1-30 keV) and performing 10 different runs for every cascade simulation, with different random number generator seeds for the velocity-scaling thermostat function. The chemical bonding in carbon was described using the adaptive intermolecular reactive empirical bond-order (AIREBO) potential coupled with the standard Ziegler-Biersack-Littmark (ZBL) potential for close-range pair interactions. This study focused on analyzing the number of defects, the final cascade morphology and the spatial distribution of defect clusters, length-scale cascade properties such as the cascade length and the range of the primary knock-on atom (PKA), and the variation of graphite's mechanical properties. It can be concluded that the number of surviving Frenkel pairs increased remarkably with increasing initial PKA energy, while no thermal spike was observed at the slightly lower energies considered here. The PKA range and cascade length scale approximately linearly with energy, which indicates that increasing the initial PKA energy comes at considerable computational cost, as for the 30 keV cascades in this study. The cascade morphology and the spatial distribution of defect clusters are related mainly to the PKA energy, while the effect of temperature is comparatively negligible. The simulations are in agreement with known experimental results and the Kinchin-Pease model, which can help in understanding graphite damage cascades and graphite's lifetime under irradiation, and can provide direction for the design of such structural materials in future reactors.
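
The Kinchin-Pease model against which the cascade statistics are checked predicts the number of Frenkel pairs from the damage energy. A minimal sketch follows; the threshold displacement energy of ~30 eV for graphite is an illustrative assumption, and the energies printed are simply the PKA range sampled in the study with no subtraction of electronic losses.

```python
def kinchin_pease(E_dam_eV, E_d_eV=30.0):
    """Kinchin-Pease estimate of displaced atoms (Frenkel pairs) from
    the damage energy E_dam, with threshold displacement energy E_d."""
    if E_dam_eV < E_d_eV:
        return 0.0          # not enough energy to displace an atom
    if E_dam_eV < 2.0 * E_d_eV:
        return 1.0          # a single displacement
    return E_dam_eV / (2.0 * E_d_eV)

for E_keV in (1, 10, 30):   # PKA energies spanning the sampled range
    print(E_keV, "keV ->", kinchin_pease(E_keV * 1e3), "Frenkel pairs")
```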

Keywords: graphite damage cascade, molecular dynamics, cascade morphology, cascade distribution

Procedia PDF Downloads 134
106 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies

Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe

Abstract:

The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide number of knowledge-driven domains such as science, education, and policy making. Nowadays, we are fueled daily by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge. A given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings are calling each other out: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse, and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads of the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017. We build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles. Our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext. There we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
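
The inter-temporal matching step at the heart of phylomemy reconstruction can be pictured as linking term clusters of consecutive periods whose overlap exceeds a threshold. The toy Jaccard-based sketch below illustrates only that principle; the actual Gargantext pipeline is more elaborate, and the clusters and threshold are invented for illustration.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two term clusters."""
    return len(a & b) / len(a | b)

def match_periods(clusters_t1, clusters_t2, threshold=0.3):
    """Link each knowledge domain (term cluster) of period t1 to the
    clusters of period t2 it plausibly evolves into (conceptual lineage)."""
    links = []
    for i, c1 in enumerate(clusters_t1):
        for j, c2 in enumerate(clusters_t2):
            sim = jaccard(c1, c2)
            if sim >= threshold:
                links.append((i, j, round(sim, 2)))
    return links

t1 = [{"election", "debate", "candidate"}, {"economy", "tax"}]
t2 = [{"election", "vote", "candidate"}, {"economy", "tax", "reform"}]
print(match_periods(t1, t2))  # lineages linking period t1 to period t2
```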

Keywords: online political debate, French election, hyper-text, phylomemy

Procedia PDF Downloads 169
105 A One-Dimensional Model for Contraction in Burn Wounds: A Sensitivity Analysis and a Feasibility Study

Authors: Ginger Egberts, Fred Vermolen, Paul van Zuijlen

Abstract:

One of the common complications in post-burn scars is contraction. Depending on the extent of contraction and the wound dimensions, the contracture can cause a limited range of motion of joints. A one-dimensional morphoelastic continuum model describing post-burn scar contraction is considered. The advantage of the one-dimensional model is its speed; it quickly yields new results and, therefore, insight. The model describes the movement of the skin and the development of the strain present. Besides these mechanical components, the model also contains chemical components that play a major role in the wound healing process: fibroblasts, myofibroblasts, the so-called signaling molecules, and collagen. The dermal layer is modeled as an isotropic morphoelastic solid, and pulling forces are generated by myofibroblasts. The solution to the model equations is approximated by the finite element method using linear basis functions. One of the major challenges in biomechanical modeling is the estimation of parameter values; therefore, this study provides a comprehensive description of skin mechanical parameter values and a sensitivity analysis. Further, since skin mechanical properties change with aging, it is important that the model be feasible for predicting the development of contraction in burn patients of different ages, and hence this study also provides a feasibility study. The variability in the solutions is produced by varying the values of some parameters simultaneously over the domain of computation, guided by the results of the sensitivity analysis. The sensitivity analysis shows that the most sensitive parameters are the equilibrium concentration of collagen, the apoptosis rate of fibroblasts and myofibroblasts, and the secretion rate of signaling molecules. This suggests that most of the variability in the evolution of contraction in burn patients of different ages may be caused by the decreasing equilibrium collagen concentration. As expected, the feasibility study shows that this model can reproduce distinct extents of contraction in burn patients of different ages. Nevertheless, contraction formation in children differs from that in adults because of growth. This factor has not yet been incorporated in the model, and therefore the feasibility results for children differ from what is seen in the clinic.
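
The finite element discretization with linear basis functions that the model uses can be illustrated on a far simpler one-dimensional model problem, -u'' = f on (0, 1) with homogeneous Dirichlet conditions. The sketch below shows only the numerical machinery (hat functions, stiffness assembly, solve), not the coupled morphoelastic system itself.

```python
import numpy as np

def fem_1d_poisson(f, n=50):
    """Solve -u'' = f on (0, 1), u(0) = u(1) = 0, with linear (hat)
    basis functions on a uniform mesh of n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n - 1, n - 1))    # stiffness matrix, interior nodes only
    b = np.zeros(n - 1)
    for i in range(n - 1):
        K[i, i] = 2.0 / h           # overlap of a hat function with itself
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -1.0 / h
        b[i] = f(x[i + 1]) * h      # lumped load vector
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, b)
    return x, u

x, u = fem_1d_poisson(lambda s: 1.0)
print(u.max())   # exact solution u = s(1 - s)/2 has maximum 0.125
```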

Keywords: biomechanics, burns, feasibility, fibroblasts, morphoelasticity, sensitivity analysis, skin mechanics, wound contraction

Procedia PDF Downloads 131
104 Teaching Academic Writing for Publication: A Liminal Threshold Experience Towards Development of Scholarly Identity

Authors: Belinda du Plooy, Ruth Albertyn, Christel Troskie-De Bruin, Ella Belcher

Abstract:

In the academy, scholarliness or intellectual craftsmanship is considered the highest level of achievement, culminating in being consistently successfully published in impactful, peer-reviewed journals and books. Scholarliness implies rigorous methods, systematic exposition, in-depth analysis and evaluation, and the highest level of critical engagement and reflexivity. However, being a scholar does not happen automatically when one becomes an academic or completes graduate studies. A graduate qualification is an indication of one’s level of research competence but does not necessarily prepare one for the type of scholarly writing for publication required after a postgraduate qualification has been conferred. Scholarly writing for publication requires a high-level skillset and a specific mindset, which must be intentionally developed. The rite of passage to become a scholar is an iterative process with liminal spaces, thresholds, transitions, and transformations. The journey from researcher to published author is often fraught with rejection, insecurity, and disappointment and requires resilience and tenacity from those who eventually triumph. It cannot be achieved without support, guidance, and mentorship. In this article, the authors use collective auto-ethnography (CAE) to describe the phases and types of liminality encountered during the liminal journey toward scholarship. The authors speak as long-time facilitators of Writing for Academic Publication (WfAP) capacity development events (training workshops and writing retreats) presented at South African universities. Their WfAP facilitation practice is structured around experiential learning principles that allow them to act as critical reading partners and reflective witnesses for the writer-participants of their WfAP events. They identify three essential facilitation features for the effective holding of a generative, liminal, and transformational writing space for novice academic writers in order to enable their safe passage through the various liminal spaces they encounter during their scholarly development journey. These features are that facilitators should be agents of disruption and liminality while also guiding writers through these liminal spaces; that there should be a sense of mutual trust and respect, shared responsibility and accountability in order for writers to produce publication-worthy scholarly work; and that this can only be accomplished with the continued application of high levels of sensitivity and discernment by WfAP facilitators. These are key features for successful WfAP scholarship training events, where focused, individual input triggers personal and professional transformational experiences, which in turn translate into high-quality scholarly outputs.

Keywords: academic writing, liminality, scholarship, scholarliness, threshold experience, writing for publication

Procedia PDF Downloads 29
103 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) is found in the food, pharmaceutical, and process industries and exhibits very complex flow behavior. To our knowledge, little research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47 and 180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic fluid range has been chosen from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, which is also used to generate the mesh and impose the boundary conditions. In all cases, the domain size is 36a × 16a, with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points were selected after a thorough grid independence study at n = 1.0. Fine, equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder; away from the cylinder, the grid is unequal in size and stretched out in all directions. Velocity inlet (u = U∞), pressure outlet (Neumann condition), and symmetry (free-slip boundary condition, du/dy = 0, v = 0) conditions are applied at the upper and lower domain boundaries. A wall boundary (u = v = 0) is imposed on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, within a finite volume framework, is selected for this purpose, being the default pressure-velocity coupling scheme in Fluent. The results obtained for Newtonian fluid flow agree well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It is observed that the drag coefficient increases continuously as n decreases. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
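
The pseudoplastic rheology here is that of a power-law (Ostwald-de Waele) fluid, whose apparent viscosity falls as the shear rate rises for n < 1; a generalized Reynolds number for power-law flow past a cylinder is commonly written as Re = ρU^(2-n)aⁿ/K. The sketch below illustrates both relations; the consistency index K and the flow values are illustrative assumptions, not the paper's settings.

```python
def apparent_viscosity(K, n, shear_rate):
    """Power-law model: mu_app = K * gamma_dot**(n - 1). For a
    pseudoplastic fluid (n < 1) viscosity drops as shear rate rises."""
    return K * shear_rate ** (n - 1.0)

def reynolds_power_law(rho, U, a, K, n):
    """Generalized Reynolds number for power-law flow past a cylinder
    of side a: Re = rho * U**(2 - n) * a**n / K."""
    return rho * U ** (2.0 - n) * a ** n / K

K, rho, U, a = 1.0, 1000.0, 0.1, 0.01    # illustrative values (SI units)
for n in (1.0, 0.8, 0.4, 0.1):           # Newtonian to highly pseudoplastic
    print(n, apparent_viscosity(K, n, shear_rate=10.0),
          reynolds_power_law(rho, U, a, K, n))
```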

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 164
102 Training During Emergency Response to Build Resiliency in Water, Sanitation, and Hygiene

Authors: Lee Boudreau, Ash Kumar Khaitu, Laura A. S. MacDonald

Abstract:

In April 2015, a magnitude 7.8 earthquake struck Nepal, killing, injuring, and displacing thousands of people. The earthquake also damaged water and sanitation service networks, leading to a high risk of diarrheal disease and the associated negative health impacts. In response to the disaster, the Environment and Public Health Organization (ENPHO), a Kathmandu-based non-governmental organization, worked with the Centre for Affordable Water and Sanitation Technology (CAWST), a Canadian education, training, and consulting organization, to develop two training programs to educate volunteers on water, sanitation, and hygiene (WASH) needs. The first training program was intended for acute response, with the second focusing on longer-term recovery. A key focus was to equip the volunteers with the knowledge and skills to formulate useful WASH advice in the unanticipated circumstances they would encounter when working in affected areas. Within the first two weeks of the disaster, a two-day acute response training was developed, focused on enabling volunteers to educate those affected by the disaster about local WASH issues, their link to health, and their increased importance immediately following emergency situations. Between March and October 2015, a total of 19 training events took place, with over 470 volunteers trained. The trained volunteers distributed hygiene kits and liquid chlorine for household water treatment. They also facilitated health messaging and WASH awareness activities in affected communities. A three-day recovery phase training was also developed and has been delivered to volunteers in Nepal since October 2015. This training focused on WASH issues during the recovery and reconstruction phases. The interventions and recommendations in the recovery phase training focus on long-term WASH solutions and so form a link between emergency relief strategies and long-term development goals. ENPHO has trained 226 volunteers during the recovery phase, with training ongoing as of April 2016. In the aftermath of the earthquake, ENPHO found that its existing pool of volunteers was more than willing to help those in their communities who were most in need. By training these and new volunteers, ENPHO was able to reach many more communities in the immediate aftermath of the disaster; together they reached 11 of the 14 earthquake-affected districts. The development of the training materials by ENPHO and CAWST was a highly collaborative and iterative process, which enabled the materials to be produced within a short response time. By training volunteers on basic WASH topics during both the immediate response and the recovery phase, ENPHO and CAWST have been able to link immediate emergency relief to long-term development goals. While the recovery phase training continues in Nepal, CAWST is planning to decontextualize the training used in both phases so that it can be applied to other emergency situations in the future. The training materials will become part of the open content materials available on CAWST's WASH Resources website.

Keywords: water and sanitation, emergency response, education and training, building resilience

Procedia PDF Downloads 286
101 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality in the derived data products. The algorithm allows us to derive the particle effective radius and the volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index remains a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known very precisely. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task, because even very small measurement errors are often hugely amplified during the solution process unless an appropriate regularization method is used. Even with a regularization method, difficulties remain, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important, as a selection criterion for an appropriate regularization method, to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 μm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
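
The truncated SVD underlying the hybrid technique can be sketched in a few lines: discard the small singular values that would otherwise amplify measurement noise, with the truncation level playing the role of the regularization parameter. The toy ill-conditioned system below is an illustrative assumption, not the lidar kernel.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Regularized solution of the ill-posed system A x = b by keeping
    only the k largest singular values (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Small singular values amplify noise as 1/s, so zero them out
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Ill-conditioned toy example: a Hilbert matrix
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)
print(np.linalg.norm(tsvd_solve(A, b, k=4) - x_true))   # regularized: small
print(np.linalg.norm(np.linalg.solve(A, b) - x_true))   # naive: blows up
```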

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 324
100 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The segmented telescope concept is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges of wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will also serve as the wavefront sensor. In this work, we study a point source, i.e., the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study with larger aberrations and noise.
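
The regression the authors describe maps a focal-plane PSF image to phasing coefficients of the segments. Below is a minimal PyTorch sketch of that idea; the layer sizes, image size, and number of segments are illustrative assumptions, far smaller than a real VGG-net.

```python
import torch
import torch.nn as nn

class PhasingNet(nn.Module):
    """Tiny VGG-flavoured CNN: PSF image in, one phasing coefficient
    per segment out - a regression head, not a classifier."""
    def __init__(self, n_segments=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_segments),
        )

    def forward(self, psf):
        return self.head(self.features(psf))

model = PhasingNet()
psf_batch = torch.randn(4, 1, 64, 64)     # batch of 64x64 focal-plane images
pred = model(psf_batch)
print(pred.shape)                          # torch.Size([4, 6])
loss = nn.MSELoss()(pred, torch.zeros(4, 6))  # train against known phasings
```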

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 80
99 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)

Authors: Ali Pourkazemi

Abstract:

The transient radar method (TRM) is one of the non-destructive methods introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method applicable over a wide frequency range; nevertheless, it requires only a narrow band, located anywhere from a few GHz to a few THz depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately but also blindly, meaning it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the multi-layer structure but can also measure the electromagnetic properties and thickness of each layer individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials, as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied are 30 cm × 50 cm, with thicknesses varying from a few millimeters to 6 cm. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since TRM is a narrow-band, free-space, real-time sensing method with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction industry, rubber industry, piping industry, wind energy industry, automotive industry, biotechnology, food industry, pharmaceuticals, etc. Detection of metallic and plastic pipes, wires, etc., through or behind walls is a specific application for the construction industry.
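
The time-of-flight principle underlying TRM relates the delay between front- and back-surface reflections of a layer to its thickness. The one-layer sketch below illustrates only that relation; the actual blind TRM algorithm jointly recovers permittivity and thickness, which this sketch does not do, and the numbers are illustrative assumptions.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(delta_t_s, eps_r):
    """One-layer time-of-flight: thickness from the round-trip delay
    between front- and back-surface echoes, d = c * dt / (2 * sqrt(eps_r))."""
    return C0 * delta_t_s / (2.0 * eps_r ** 0.5)

# Illustrative numbers: 0.5 ns round-trip delay in a foam with eps_r ~ 1.1
print(layer_thickness(0.5e-9, 1.1) * 100, "cm")  # ~7.1 cm
```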

Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies

Procedia PDF Downloads 46
98 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant portion of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible system or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access), as well as recognition, data fusion, and communication protocols, has become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
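
A toy illustration of the tradespace-exploration core: enumerate candidate sensor-system designs, score each against weighted stakeholder attributes, and keep the non-dominated (Pareto) set. All design variables, weights, and utility numbers below are invented for illustration and are not the authors' model.

```python
from itertools import product

# Hypothetical design variables: (name, performance proxy, cost)
sensors = [("lidar", 0.9, 120.0), ("radar", 0.7, 60.0), ("camera", 0.5, 20.0)]
cpus    = [("fast", 0.8, 80.0), ("slow", 0.4, 25.0)]
weights = {"performance": 0.7, "cost": 0.3}   # stakeholder preferences

def evaluate(sensor, cpu):
    name = f"{sensor[0]}+{cpu[0]}"
    performance = sensor[1] * cpu[1]          # crude fused-performance proxy
    cost = sensor[2] + cpu[2]
    utility = (weights["performance"] * performance
               - weights["cost"] * cost / 200.0)
    return name, performance, cost, utility

designs = [evaluate(s, c) for s, c in product(sensors, cpus)]

# Pareto front: keep designs no other design both out-performs and undercuts
pareto = [d for d in designs
          if not any(o[1] >= d[1] and o[2] <= d[2] and o != d for o in designs)]
for name, perf, cost, util in sorted(pareto, key=lambda d: -d[3]):
    print(f"{name}: perf={perf:.2f} cost={cost:.0f} utility={util:.2f}")
```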

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 161
97 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use ankle-foot orthoses (AFOs) have a high level of dissatisfaction with their current AFOs. Studies point to a focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age 64.29 ± 8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of the results identified sources of user dissatisfaction, such as heaviness, bulk, and uncomfortable material, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards. Given the large body of research concerning these standards, the objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO's medical purpose was compiled. These metrics and constraints establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two, the piano wire model and the segmented model, were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 130
96 A Case Study on the Development and Application of Media Literacy Education Program Based on Circular Learning

Authors: Kim Hyekyoung, Au Yunkyung

Abstract:

As media plays an increasingly important role in our lives, the age at which media usage begins is getting younger worldwide. Particularly, young children are exposed to media at an early age, making early childhood media literacy education an essential task. However, most existing early childhood media literacy education programs focus solely on teaching children how to use media, and practical implementation and application are challenging. Therefore, this study aims to develop a play-based early childhood media literacy education program utilizing topic-based media content and explore the potential application and impact of this program on young children's media literacy learning. Based on theoretical and literature review on media literacy education, analysis of existing educational programs, and a survey on the current status and teacher perceptions of media literacy education for preschool children, this study developed a media literacy education program for preschool children, considering the components of media literacy (understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication). To verify the effectiveness of the program, 20 preschool children aged 5 from C City M Kindergarten were chosen as participants, and the program was implemented from March 28th to July 4th, 2022, once a week for a total of 7 sessions. The program was developed based on Gallenstain's (2003) iterative learning model (participation-exploration-explanation-extension-evaluation). To explore the quantitative changes before and after the program, a repeated measures analysis of variance was conducted, and qualitative analysis was employed to examine the observed process changes. It was found that after the application of the education program, media literacy levels such as understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication significantly improved. The recursive learning-based early childhood media literacy education program developed in this study can be effectively applied to young children's media literacy education and help enhance their media literacy levels. In terms of observed process changes, it was confirmed that children learned about various topics, expressed their thoughts, and improved their ability to communicate with others using media content. These findings emphasize the importance of developing and implementing media literacy education programs and can contribute to empowering young children to safely and effectively utilize media in their media environment. The results of this study, exploring the potential application and impact of the recursive learning-based early childhood media literacy education program on young children's media literacy learning, demonstrated positive changes in young children's media literacy levels. These results go beyond teaching children how to use media and can help foster their ability to safely and effectively utilize media in their media environment. Additionally, to enhance young children's media literacy levels and create a safe media environment, diverse content and methodologies are needed, and the continuous development and evaluation of education programs should be conducted.

Keywords: young children, media literacy, recursive learning, education program

Procedia PDF Downloads 53
95 Effect of Bonded and Removable Retainers on Occlusal Settling after Orthodontic Treatment: A Systematic Review and Meta-Analysis

Authors: Umair Shoukat Ali, Kamil Zafar, Rashna Hoshang Sukhia, Mubassar Fida, Aqeel Ahmed

Abstract:

Objective: This systematic review and meta-analysis aimed to summarize the effectiveness of bonded and removable retainers (Hawley and Essix retainers) in terms of improvement in occlusal settling (occlusal contact points/areas) after orthodontic treatment. Search method: We searched the Cochrane Library, CINAHL Plus, PubMed, Web of Science, orthodontic journals, and Google Scholar for eligible studies. We included randomized controlled trials (RCTs) along with cohort studies. Studies that reported occlusal contacts/areas during retention with fixed bonded and removable retainers were included. To assess the quality of the RCTs, the Cochrane risk of bias tool was utilized, whereas the Newcastle-Ottawa Scale was used to assess the quality of the cohort studies. Data analysis: The data analysis was limited to reporting mean values of occlusal contact points/areas with different retention methods. Using RevMan software V.5.3, a meta-analysis was performed for all studies with quantitative data. For the computation of the summary effect, a random-effects model was utilized in cases of high heterogeneity. The I² statistic was utilized to assess heterogeneity among the selected studies. Results: We included 6 articles in our systematic review after scrutinizing 219 articles and eliminating the rest based on duplication, titles, and objectives. We found significant differences between fixed and removable retainers in terms of occlusal settling within the included studies. The bonded retainer (BR) allowed faster and better posterior tooth settling compared to the Hawley retainer (HR). However, the HR showed good occlusal settling in the anterior dental arch. The Essix retainer showed a decrease in occlusal contact during the retention phase. The meta-analysis showed no statistically significant difference between bonded and removable retainers. Conclusions: The HR allowed better overall occlusal settling than the other retainers in comparison; however, the BR allowed faster settling in the posterior teeth region. Overall, there are insufficient high-quality RCTs to provide additional evidence, and further high-quality RCT research is needed.
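
The heterogeneity statistic and random-effects pooling used in the analysis can be sketched with the DerSimonian-Laird method. The study means and standard errors below are invented for illustration, not data extracted from the included articles.

```python
import numpy as np

def random_effects(means, ses):
    """DerSimonian-Laird random-effects pooling of study means.
    Returns the pooled estimate, Cochran's Q, and the I^2 statistic (%)."""
    y = np.asarray(means)
    w = 1.0 / np.asarray(ses) ** 2            # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q
    df = len(y) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (np.asarray(ses) ** 2 + tau2)  # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, Q, I2

# Invented occlusal-contact means and standard errors from 4 studies
print(random_effects([2.1, 3.4, 1.8, 2.9], [0.4, 0.6, 0.5, 0.7]))
```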

Keywords: orthodontic retainers, occlusal contact, Hawley, fixed, vacuum-formed

Procedia PDF Downloads 98
94 Analytical and Numerical Studies on the Behavior of a Freezing Soil Layer

Authors: X. Li, Y. Liu, H. Wong, B. Pardoen, A. Fabbri, F. McGregor, E. Liu

Abstract:

The target of this paper is to investigate how saturated poroelastic soils behave when subjected to freezing temperatures and how different boundary conditions intervene in and affect the thermo-hydro-mechanical (THM) response, based on the particular but classical configuration of a finite homogeneous soil layer studied by Terzaghi. The essential relations governing the constitutive behavior of a freezing soil are first recalled: the ice crystal - liquid water thermodynamic equilibrium, the hydromechanical constitutive equations, momentum balance, water mass balance, and the thermal diffusion equation, in the general non-linear case where material parameters are state-dependent. The system of equations is first linearized by assuming all material parameters to be constant, in particular the permeability to liquid water, which in reality depends on the ice content. Two analytical solutions are then derived via the classic Laplace transform, accounting for two different sets of boundary conditions. Afterward, the general non-linear equations with state-dependent parameters are solved with the commercial finite element code COMSOL to obtain numerical results. The validity of this numerical model is partially verified against the analytical solution in the limiting case of state-independent parameters. Comparison between the linearized analytical solutions and the non-linear numerical model reveals that the linear computation always underestimates the liquid pore pressure and the displacement, whatever the hydraulic boundary conditions. In the non-linear model, the faster growth of ice crystals, and the accompanying reduction of the permeability of the freezing soil layer, prolongs the depressurization of the liquid water and slows the settlement when the ground surface is swiftly covered by a thin layer of ice, and leads to a higher overall liquid pressure and greater swelling when the ground surface is impermeable. Nonetheless, the analytical solutions based on the linearized equations give a correct order-of-magnitude estimate, especially for moderate temperature variations, and remain a useful tool for preliminary design checks.
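As an illustration of the Laplace-transform step (a generic sketch, not the authors' full coupled system): for a linearized diffusion-type equation with constant diffusivity \( \alpha \), the transform turns the PDE into an ODE in the depth coordinate \( z \),

\[ \frac{\partial \theta}{\partial t} = \alpha\,\frac{\partial^2 \theta}{\partial z^2} \quad \xrightarrow{\ \mathcal{L}\ } \quad s\,\bar{\theta}(z,s) - \theta(z,0) = \alpha\,\frac{d^2 \bar{\theta}}{dz^2}(z,s), \]

whose general solution for a zero initial state is

\[ \bar{\theta}(z,s) = A(s)\,e^{-z\sqrt{s/\alpha}} + B(s)\,e^{\,z\sqrt{s/\alpha}}, \]

with \( A(s) \) and \( B(s) \) fixed by the two boundary conditions at the top and bottom of the layer. The two sets of boundary conditions studied in the paper enter precisely here, and the real-domain response follows by inverting the transform.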

Keywords: chemical potential, cryosuction, Laplace transform, multiphysics coupling, phase transformation, thermodynamic equilibrium

Procedia PDF Downloads 56
93 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances

Authors: Chris Thurlbourne

Abstract:

Due to the continual demand for material to build with, and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types, both biogenic and non-biogenic, and a field of research that accompanies this. This research development focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In the building industry and all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, those with biogenic and those with non-biogenic material properties. Both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from the earth, and many are modified through processing to provide resistance to the 'forces of nature', be it rain, wind, sun, gravity, or whatever else the local environmental conditions throw at us. These modifications offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value in all types of building material. We assume control of building materials through rigorous quality-control specifications and regulations that ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, in which materials naturally act and react through weathering, patination, and discoloring, promoting natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and to show how an understanding of a material's natural processes of transformation when exposed to the external climate can inform initial design decisions. Received in a transient and contingent manner, the ecological relationships between a material, its colonizing organisms, and the resulting performances invite opportunities for new design explorations that benefit both the needs of human society and the needs of our natural environment. The research designs for the benefit of both, engaging in biogenic and non-biogenic material engineering whilst embracing the continual demand for colonization, human and environmental, and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing any severe deterioration, instead embracing weathering, patination, and discoloring while establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory tests to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and on facilitating longer-term patinas of material surfaces that extend the aesthetic beyond common judgments. The experiments are therefore focused on how inherent material qualities drive a design brief toward specific investigations, exploring the aesthetics induced through production, patina, and colonization obtained over time while exposed to, and interacting with, external climate conditions.

Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina

Procedia PDF Downloads 63
92 A Framework of Virtualized Software Controller for Smart Manufacturing

Authors: Pin Xiu Chen, Shang Liang Chen

Abstract:

A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed onto a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework. After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery for motion control through 5G communication. The communication time interval at each stage is measured using the C++ chrono library to obtain the time difference for each command transmission. The relevant test results will be organized and presented in the full text.
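The timing methodology described above can be pictured with a short sketch. The stage functions below are hypothetical placeholders for the actual 5G upload, edge computation, and command download steps; only the std::chrono usage reflects what the abstract describes:

```cpp
#include <chrono>
#include <cstdio>

// Sketch of measuring per-stage communication latency with std::chrono.
// The stage functions are hypothetical placeholders, not the authors' code.
void upload_machine_state() { /* ... 5G/IoT upload ... */ }
void compute_motion_command() { /* ... edge/cloud computation ... */ }
void download_command_to_machine() { /* ... 5G download ... */ }

int main() {
    using clock = std::chrono::steady_clock;  // monotonic, suited to intervals

    auto t0 = clock::now();
    upload_machine_state();
    auto t1 = clock::now();
    compute_motion_command();
    auto t2 = clock::now();
    download_command_to_machine();
    auto t3 = clock::now();

    // Convert each interval to microseconds for reporting.
    auto us = [](clock::time_point a, clock::time_point b) {
        return std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
    };
    std::printf("upload: %lld us, compute: %lld us, download: %lld us\n",
                static_cast<long long>(us(t0, t1)),
                static_cast<long long>(us(t1, t2)),
                static_cast<long long>(us(t2, t3)));
}
```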

Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing

Procedia PDF Downloads 45
91 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algerian Territory

Authors: Fouzia Brihmat

Abstract:

Economic planning models, which are used to design microgrids and distributed energy resources (DER), often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has high solar irradiation potential and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand but have limited storage capacity; lithium-ion batteries last longer and are lighter but are generally more expensive. By combining the advantages of each chemistry, cost-effective high-capacity battery banks that operate solely on AC coupling can be produced. The financial analysis in this research reflects the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link, and the system is simulated at each time step over the project's 20 years. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation; the trade-off is that the model is more precise, but the computation takes longer. We initially utilized the Optimizer to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator with 760 kW, 200 kWh of lead-acid storage, and a somewhat lower COE of $0.309/kWh. Different scenarios that account for fluctuations in the gasified biomass generator's electricity production have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent companion study.
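As a sketch of how a total net present cost and a cost of energy relate to an annualized system cost, following the capital-recovery-factor convention commonly used by HOMER (all input numbers below are hypothetical, not the case-study values):

```cpp
#include <cmath>
#include <cstdio>

// Capital recovery factor: converts a present cost into a uniform
// annual cost over N years at real discount rate i.
double crf(double i, int N) {
    double f = std::pow(1.0 + i, N);
    return i * f / (f - 1.0);
}

int main() {
    // Hypothetical inputs, not the Algerian case-study values.
    double i = 0.06;                     // real discount rate
    int    N = 20;                       // project lifetime, years
    double annualized_cost = 250000.0;   // $/yr: capital + O&M + fuel - salvage
    double energy_served   = 800000.0;   // kWh served per year

    double tnpc = annualized_cost / crf(i, N);      // total net present cost
    double coe  = annualized_cost / energy_served;  // levelized cost of energy

    std::printf("TNPC = $%.0f, COE = $%.3f/kWh\n", tnpc, coe);
}
```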

Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost

Procedia PDF Downloads 67
90 Computational Fluid Dynamics Design and Analysis of Aerodynamic Drag Reduction Devices for a Mazda T3500 Truck

Authors: Basil Nkosilathi Dube, Wilson R. Nyemba, Panashe Mandevu

Abstract:

In highway driving, over 50 percent of the power produced by the engine is used to overcome aerodynamic drag, the force that opposes a body's motion through the air. Aerodynamic drag, and thus fuel consumption, increases rapidly at speeds above 90 km/h, so reducing drag in highway driving is the best approach to minimizing fuel consumption and the negative impact of greenhouse gas emissions on the natural environment. Fuel economy is the ultimate concern of automotive development. This study aims to design and analyze drag-reducing devices for a Mazda T3500 truck, namely a cab roof fairing and a rear (trailer tail) fairing, and to investigate the aerodynamic effects of adding these devices. To accomplish this, two 3D CAD models of the Mazda truck were created in the Design Modeler: one with the add-on devices and one without. The models were exported to ANSYS Fluent for computational fluid dynamics analysis; no wind tunnel tests were performed. A fine mesh with more than 10 million cells was applied in the discretization of the models. The realizable k-ε turbulence model with enhanced wall treatment was used to solve the Reynolds-Averaged Navier-Stokes (RANS) equations. To simulate highway driving conditions, the tests were run at a speed of 100 km/h; the effects of the devices were also investigated for low-speed driving. The drag coefficients for both models were obtained from the numerical calculations. By adding the cab roof and rear (trailer tail) fairings, the simulations show a significant reduction in aerodynamic drag at higher speed, and the greatest drag reduction is obtained when both devices are used. Visuals from post-processing show that the rear fairing minimized the low-pressure region at the rear of the trailer at highway speed, which it achieved by streamlining the turbulent airflow and thereby delaying flow separation. At lower speeds, there were no significant differences in the drag coefficients of the two models (original and modified). These results show that the devices can be adopted to improve the aerodynamic efficiency of the Mazda T3500 truck at highway speeds.
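The reported speed dependence follows directly from the standard drag equation: the drag force and the power spent against it are

\[ F_d = \tfrac{1}{2}\,\rho\,v^2 C_d A, \qquad P_d = F_d\,v = \tfrac{1}{2}\,\rho\,v^3 C_d A, \]

where \( \rho \) is the air density, \( v \) the vehicle speed, \( A \) the frontal area, and \( C_d \) the drag coefficient extracted from the simulations. Because the power lost to drag grows with the cube of speed, fairings that lower \( C_d \) pay off mainly at highway speeds, consistent with the results above.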

Keywords: aerodynamic drag, computational fluid dynamics, fluent, fuel consumption

Procedia PDF Downloads 120
89 Bridging the Educational Gap: A Curriculum Framework for Mass Timber Construction Education and Comparative Analysis of Physical vs. Virtual Prototypes in Construction Management

Authors: Farnaz Jafari

Abstract:

The surge in mass timber construction represents a pivotal moment in sustainable building practices, yet the lack of comprehensive education in construction management poses a challenge in harnessing this innovation effectively. This research endeavors to bridge this gap by developing a curriculum framework integrating mass timber construction into undergraduate and industry certificate programs. To optimize learning outcomes, the study explores the impact of two prototype formats, Virtual Reality (VR) simulations and physical mock-ups, on students' understanding and skill development. The curriculum framework aims to equip future construction managers with a holistic understanding of mass timber, covering its unique properties, construction methods, building codes, and sustainable advantages. The study adopts a mixed-methods approach, commencing with a systematic literature review and leveraging surveys and interviews with educators and industry professionals to identify existing educational gaps. The iterative development process involves incorporating stakeholder feedback into the curriculum. The evaluation of prototype impact employs pre- and post-tests administered to participants engaged in pilot programs. Through qualitative content analysis and quantitative statistical methods, the study seeks to compare the effectiveness of VR simulations and physical mock-ups in conveying knowledge and skills related to mass timber construction. The anticipated findings will illuminate the strengths and weaknesses of each approach, providing insights for future curriculum development. The curriculum's expected contribution to sustainable construction education lies in its emphasis on practical application, bridging the gap between theoretical knowledge and hands-on skills. The research also seeks to establish a standard for mass timber construction education, contributing to the field through a unique comparative analysis of VR simulations and physical mock-ups. The study's significance extends to the development of best practices and evidence-based recommendations for integrating technology and hands-on experiences in construction education. By addressing current educational gaps and offering a comparative analysis, this research aims to enrich the construction management education experience and pave the way for broader adoption of sustainable practices in the industry. The envisioned curriculum framework is designed for versatile integration, catering to undergraduate programs and industry training modules, thereby enhancing the educational landscape for aspiring construction professionals. Ultimately, this study underscores the importance of proactive educational strategies in preparing industry professionals for the evolving demands of the construction landscape, facilitating a seamless transition towards sustainable building practices.

Keywords: curriculum framework, mass timber construction, physical vs. virtual prototypes, sustainable building practices

Procedia PDF Downloads 43
88 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys

Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio

Abstract:

Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace engineering. In the last decades, the ever-growing interest in such materials has prompted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to address the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy with respect to the thermal and mechanical variables. This phrasing of the model is new, permits a discussion of the model from both theoretical and numerical points of view, and directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented under a space-homogeneous temperature field assumption and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem, and the iterative scheme is solved by a generalized Newton method. Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, the heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between the phase transformation strain and the material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests thus demonstrate that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates, and that the algorithm is effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
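For context, in the GENERIC formalism the state \( z \) evolves as

\[ \dot{z} = L(z)\,\mathrm{D}E(z) + K(z)\,\mathrm{D}S(z), \qquad L\,\mathrm{D}S = 0, \quad K\,\mathrm{D}E = 0, \]

where \( E \) is the total energy, \( S \) the total entropy, \( L \) is antisymmetric (reversible dynamics), and \( K \) is symmetric positive semidefinite (dissipative dynamics); the degeneracy conditions encode energy conservation and non-negative entropy production. A generalized gradient flow of the total entropy, as used in the reformulation above, corresponds to the purely dissipative part \( \dot{z} = K(z)\,\mathrm{D}S(z) \), which is why dissipativity of the flow follows directly from the structure of the equations.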

Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling

Procedia PDF Downloads 200
87 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses

Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen

Abstract:

Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health, and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw funded, 2014-2018) is to develop an online education and communication One Health platform, "eZoon", for the general public and for professionals working in the veterinary, human health, and public health domains, to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health, and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions, consisting of a literature scan, expert recommendations, and snowball sampling. A stakeholder salience approach was used to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers. Results: The following key values were identified (among others): (a) a need for improved mutual awareness between the veterinary and human health fields; (b) information exchange between veterinary and human health, particularly at a regional level; (c) legal regulations that match daily practice; (d) professionals and the general public addressed separately, using tailored language and information; (e) information that is of value to professionals (relevant, important, accurate, and with financial or other important consequences if ignored) so that it is picked up; and (f) accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting the key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level are trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored to different user profiles, and social media channels to inform the general public adequately during outbreaks.

Keywords: ehealth, one health, risk communication, stakeholder, zoonosis

Procedia PDF Downloads 260
86 Design of the Intelligent Virtual Learning Coach: A Contextual Learning Approach to Digital Literacy of Senior Learners in the Context of Electronic Health Record (EHR)

Authors: Ilona Buchem, Carolin Gellner

Abstract:

The call for the support of senior learners in the development of digital literacy has become prevalent in recent years, especially in view of aging societies paired with advances in digitalization in all spheres of life, including e-health. The goal has been to create opportunities for learning that incorporate the use of context in a reflective and dialogical way. Contextual learning focuses on developing skills through the application of authentic problems. While major research efforts in supporting senior learners in developing digital literacy have so far been invested in e-learning, focusing on knowledge acquisition and cognitive tasks, little research exists on reflective mentoring and coaching with the help of pedagogical agents that addresses the contextual dimensions of learning. This paper describes an approach to creating opportunities for senior learners to improve their digital literacy in the authentic context of the electronic health record (EHR) with the support of an intelligent virtual learning coach. The paper focuses on the design of the virtual coach as part of an e-learning system, which was developed in the EPA-Coach project funded by the German Ministry of Education and Research. The paper starts with the theoretical underpinnings of contextual learning and the related design considerations for a virtual learning coach based on previous studies. Since previous research in the area was mostly designed to cater to the needs of younger audiences, the results had to be adapted to the specific needs of senior learners. Next, the paper outlines the stages in the design of the virtual coach, which included the adaptation of the design requirements, the iterative development of the prototypes, the results of the two evaluation studies, and how these results were used to improve the design of the virtual coach. The paper then presents the four prototypes of a senior-friendly virtual learning coach, which were designed to represent different preferences related to visual appearance, communication and social interaction styles, and pedagogical roles. The first evaluation of the virtual coach design was an exploratory, qualitative study carried out in October 2020 with eight seniors aged 64 to 78, and included a range of questions about the preferences of senior learners related to visual design, gender, age, communication, and role. Based on the results of the first evaluation, the design was adapted to the preferences of the senior learners, and new versions of the prototypes were created to represent two male and two female options for the virtual coach. The second evaluation followed a quantitative approach with an online questionnaire and was conducted in May 2021 with 41 seniors aged 66 to 93 years. Following three research questions, the survey asked about (1) the intention to use, (2) the perceived characteristics, and (3) the preferred communication/interaction style of the virtual coach, i.e., task-oriented, relationship-oriented, or a mix. The paper continues with a discussion of the results of the design process and ends with conclusions and next steps in the development of the virtual coach, including recommendations for further research.

Keywords: virtual learning coach, virtual mentor, pedagogical agent, senior learners, digital literacy, electronic health records

Procedia PDF Downloads 154
85 Double Wishbone Pushrod Suspension Systems Co-Simulation for Racing Applications

Authors: Suleyman Ogul Ertugrul, Mustafa Turgut, Serkan Inandı, Mustafa Gorkem Coban, Mustafa Kıgılı, Ali Mert, Oguzhan Kesmez, Murat Ozancı, Caglar Uyulan

Abstract:

In high-performance automotive engineering, the realistic simulation of suspension systems is crucial for enhancing vehicle dynamics and handling. This study focuses on the double wishbone suspension system, prevalent in racing vehicles due to its superior control and stability characteristics. Utilizing MATLAB and Adams Car simulation software, we conduct a comprehensive analysis of displacement behaviors and damper sizing under various dynamic conditions. The initial phase involves using MATLAB to simulate the entire suspension system, allowing for the preliminary determination of damper size based on the system's response under simulated conditions. Following this, manual calculations of wheel loads are performed to assess the forces acting on the front and rear suspensions during scenarios such as braking, cornering, maximum vertical loads, and acceleration. Further dynamic force analysis is carried out using MATLAB Simulink, focusing on the interactions between suspension components during key movements such as bumps and rebounds. This simulation helps in formulating precise force equations and in calculating the stiffness of the suspension springs. To enhance the accuracy of our findings, we focus on a detailed kinematic and dynamic analysis. This includes the creation of kinematic loops, derivation of relevant equations, and computation of Jacobian matrices to accurately determine damper travel and compression metrics. The calculated spring stiffness is crucial in selecting appropriate springs to ensure optimal suspension performance. To validate and refine our results, we replicate the analyses using the Adams Car software, renowned for its detailed handling of vehicular dynamics. The goal is to achieve a robust, reliable suspension setup that maximizes performance under the extreme conditions encountered in racing scenarios. This study exemplifies the integration of theoretical mechanics with advanced simulation tools to achieve a high-performance suspension setup that can significantly improve race car performance, providing a methodology that can be adapted for different types of racing vehicles.
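One step named above, converting a target wheel response into a spring stiffness, can be sketched compactly. The ride-frequency and motion-ratio numbers below are hypothetical, not the team's values; the formulas (wheel rate from ride frequency, spring rate via the motion-ratio squared) are the standard ones:

```cpp
#include <cmath>
#include <cstdio>

// Sketch of sizing a pushrod suspension spring from a target ride
// frequency f: wheel rate k_w = (2*pi*f)^2 * m_s for sprung corner mass
// m_s. With motion ratio MR = spring travel / wheel travel, the spring
// rate follows as k_spring = k_w / MR^2. All numbers are hypothetical.
int main() {
    const double pi = 3.14159265358979;
    double f  = 3.0;   // target ride frequency, Hz (stiff race setup)
    double ms = 70.0;  // sprung mass per corner, kg
    double MR = 0.9;   // motion ratio: spring travel / wheel travel

    double k_wheel  = std::pow(2.0 * pi * f, 2) * ms;  // N/m at the wheel
    double k_spring = k_wheel / (MR * MR);             // N/m at the spring

    std::printf("wheel rate = %.0f N/m, spring rate = %.0f N/m\n",
                k_wheel, k_spring);
}
```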

Keywords: FSAE, suspension system, Adams Car, kinematic

Procedia PDF Downloads 25