Search results for: simulation techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11024

8114 Influence of the Cooking Technique on the Iodine Content of Frozen Hake

Authors: F. Deng, R. Sanchez, A. Beltran, S. Maestre

Abstract:

The high nutritional value associated with seafood is related to the presence of essential trace elements. Moreover, seafood is considered an important source of energy, proteins, and long-chain polyunsaturated fatty acids. Seafood is generally consumed cooked, and consequently its nutritional value can be degraded. Seafood such as fish, shellfish, and seaweed can be considered one of the main dietary iodine sources, and deficient or excessive iodine intake can cause dysfunction and pathologies related to the thyroid gland. The main objective of this work is to evaluate iodine stability in hake (Merluccius) subjected to different culinary techniques. The culinary processes considered were boiling, steaming, microwave cooking, baking, cooking en papillote (a wrapper twisted at both ends, like a sweet wrapper), and coating with a batter of flour and deep-frying. The determination of iodine was carried out by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Regarding sample handling strategies, liquid-liquid extraction has been shown to be a powerful pre-concentration and clean-up approach for trace metal analysis by ICP techniques; extraction with tetramethylammonium hydroxide (TMAH reagent) was used as the sample preparation method in this work. Based on the results, it can be concluded that iodine is degraded by the cooking processes. The greatest losses were observed for boiling and microwave cooking, for which the iodine content of hake decreased by up to 60% and 52%, respectively. However, if the boiling liquid is preserved, the loss generated during cooking is reduced. The iodine content was preserved only when the fish was cooked en papillote.

Keywords: cooking process, ICP-MS, iodine, hake

Procedia PDF Downloads 129
8113 Exploring the ‘Many Worlds’ Interpretation in Both a Philosophical and Creative Literary Framework

Authors: Jane Larkin

Abstract:

Combining elements of philosophy, science, and creative writing, this investigation explores how a philosophically structured science-fiction novel can challenge the theory of the linearity and singularity of time through the ‘many worlds’ theory. This concept is addressed through the creation of a research exegesis and an accompanying creative artefact, designed to be read in conjunction with each other in an explorative, interwoven manner. Research into scientific concepts, such as the ‘many worlds’ interpretation of quantum mechanics, and into diverse philosophers and their ideologies of time, is embodied in an original science-fiction narrative titled It Goes On. The five frames that make up the creative artefact are informed not only by five leading philosophers and their philosophies of time but also by the research, which comes first in the paper. Traditional approaches to storytelling are creatively and innovatively inverted in several ways, thus challenging the singularity and linearity of time. Further nonconventional literary techniques include an abstract narrator, embodied by time, both a concept and a figure in the text, whose voice and vantage point in relation to death heighten the unreliability of the notion of time. These devices challenge individuals’ understanding of complex scientific and philosophical views in a variety of ways. The science-fiction genre is essential when considering the speculative nature of It Goes On, which deals with parallel realities and is a fantastical exploration of human ingenuity in plausible futures. This paper therefore documents the research-led methodology used to create It Goes On, the application of the ‘many worlds’ theory within a framed narrative, and the many innovative techniques used to contribute new knowledge in a variety of fields.

Keywords: time, many-worlds theory, Heideggerian philosophy, framed narrative

Procedia PDF Downloads 67
8112 Expert System for Road Bridge Constructions

Authors: Michael Dimmer, Holger Flederer

Abstract:

The basis of realizing a construction project is a technically flawless concept which satisfies environmental and cost conditions as well as static-constructional requirements. The presented software system actively supports civil engineers in developing optimal designs by giving advice regarding durability, life-cycle costs, sustainability, and much more. A major part of the surrounding conditions of a design process is gathered and assimilated subconsciously by experienced engineers. The question is which building techniques are eligible and practicable once the resulting costs are considered. Planning engineers have acquired much of this experience during their professional lives and use it in their daily work. At times, however, the planning engineer should step back from this experience to remain open to new and better solutions that also meet the functional demands. The developed expert system gives planning engineers recommendations for preferred design options for new constructions as well as for existing bridge constructions. It can analyze construction elements and techniques regarding sustainability and life-cycle costs, and on this basis provides recommendations for future constructions. Furthermore, there is an option to assess existing road bridges for heavy-duty transport. This includes a route planning tool that gives quick and reliable information as to whether the bridge support structures along a transport route have been dimensioned sufficiently for a certain heavy-duty transport. The use of this expert system in bridge planning companies and building authorities will save substantial costs for new and existing bridge constructions, because parameters like life-cycle costs and sustainability are consistently considered in its planning recommendations.
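The abstract does not disclose the system's actual knowledge base. As a minimal sketch of how such recommendation logic is often encoded, the following uses invented rules, thresholds, and option names (all placeholders, not taken from the system described above):

```python
# Hypothetical rule base: every threshold and recommendation below is an
# invented placeholder, not taken from the expert system described above.
RULES = [
    (lambda p: p["span_m"] <= 30,
     ("prestressed concrete beam", "economical and durable for short spans")),
    (lambda p: 20 <= p["span_m"] <= 120,
     ("steel-concrete composite girder",
      "favourable life-cycle costs on medium spans")),
    (lambda p: p["heavy_duty"],
     ("increased deck and girder stiffness",
      "required when the route must carry heavy-duty transport")),
]

def recommend(params):
    """Return all (option, rationale) pairs whose condition matches."""
    return [advice for condition, advice in RULES if condition(params)]

options = recommend({"span_m": 45, "heavy_duty": True})
```

A real system would attach life-cycle-cost and sustainability scores to each rule and rank the matches rather than simply listing them.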

Keywords: expert system, planning process, road bridges, software system

Procedia PDF Downloads 266
8111 Adsorption Behavior and Mechanism of Illite Surface under the Action of Different Surfactants

Authors: Xiuxia Sun, Yan Jin, Zilong Liu, Shiming Wei

Abstract:

As a critical mineral component of shale, illite is essential in oil exploration and development due to its surface hydration characteristics and action mechanism. Starting from the molecular structure of the organic matter, this paper uses molecular dynamics simulation to explore the interaction mechanism between organic molecules and the illite surface. In the study, we accounted for van der Waals, electrostatic, and steric interactions and constructed illite crystal models covering C8-C18 modifiers. We then systematically analyzed the adsorption behavior and hydration characteristics of surfactants with different alkyl chain numbers, lengths, and concentrations on the illite surface. The simulation results show that surfactant molecules with shorter alkyl chains adopt a lateral monolayer or an inclined double-layer arrangement on the illite surface, and these two arrangements may coexist under different concentration conditions. In addition, the interlayer spacing of illite increases significantly with the number of alkyl chains, whereas changes in alkyl chain length have a limited effect on surface properties. Notably, changes in functional group structure have a particularly significant effect on the wettability of the illite surface, exceeding even the effect of changes in the alkyl chain structure. This finding offers a new perspective on understanding and regulating wetting properties. The results are consistent with the XRD analysis and wettability experiments reported in this paper, further confirming the reliability of the conclusions. This study deepens our understanding of illite's hydration characteristics and mechanism and provides new ideas and directions for the molecular design and application development of oilfield chemicals.
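The van der Waals and electrostatic interactions mentioned above are commonly modelled in molecular dynamics force fields as Lennard-Jones and Coulomb pair terms. The sketch below shows these two terms in isolation; the parameter values are generic illustrations, not force-field parameters from this study:

```python
KE = 138.935458  # Coulomb constant in kJ*mol^-1*nm*e^-2 (common MD unit convention)

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy (van der Waals term); r and sigma in nm."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def coulomb_energy(r, q_i, q_j, eps_r=1.0):
    """Coulomb pair energy in kJ/mol; charges in units of e, r in nm."""
    return KE * q_i * q_j / (eps_r * r)

# the LJ well minimum sits at r = 2^(1/6)*sigma with depth -epsilon
r_min = 2.0 ** (1.0 / 6.0) * 0.34          # illustrative sigma = 0.34 nm
e_min = lj_energy(r_min, epsilon=0.996, sigma=0.34)
```

Production MD codes sum these pair terms over all atom pairs (with cutoffs and long-range corrections) plus bonded terms; the sketch only shows the functional forms.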

Keywords: illite, surfactant, hydration, wettability, adsorption

Procedia PDF Downloads 26
8110 The Role of Vibro Stone Columns in Enhancing Soft Soil Properties

Authors: Mohsen Ramezan Shirazi, Orod Zarrin, Komeil Valipourian

Abstract:

This study investigated the behavior of soft soils improved through the vibro replacement technique, considering their settlements and consolidation rates, the applicability of the technique to various soil types, and the associated settlement and bearing capacity calculations.

Keywords: bearing capacity, expansive clay, stone columns, vibro techniques

Procedia PDF Downloads 573
8109 Synthesis of MIPs towards Precursors and Intermediates of Illicit Drugs and Their Subsequent Application in a Sensing Unit

Authors: K. Graniczkowska, N. Beloglazova, S. De Saeger

Abstract:

The threat of synthetic drugs is one of the most significant current drug problems worldwide. The use of drugs of abuse has increased dramatically during the past three decades. Among others, amphetamine-type stimulants (ATS) are globally the second most widely used drugs after cannabis, exceeding the use of cocaine and heroin. ATS are potent central nervous system (CNS) stimulants, capable of inducing euphoric states similar to cocaine. Recreational use of ATS is widespread, even though irreversible damage to the CNS has been reported. ATS production also pollutes the environment by discharging large volumes of liquid waste into the sewage system. Therefore, there is a demand for robust and sensitive sensors that can detect ATS and their intermediates in environmental water samples, and a rapid and simple test is required. Antibody-based tests cannot be applied to environmental water samples, which can be a harsh matrix. Therefore, molecularly imprinted polymers (MIPs), often described as synthetic antibodies, were chosen for this approach. MIPs are characterized by high mechanical and thermal stability and show chemical resistance over a broad pH range and in various organic or aqueous solvents. These properties make them the preferred type of receptor for the harsh conditions imposed by environmental samples. To the best of our knowledge, there are no existing MIP-based sensors for amphetamine and its intermediates, and few commercial MIPs are available for this application. Therefore, the aim of this study was to compare different techniques for obtaining MIPs with high specificity towards ATS and to characterize them for subsequent use in a sensing unit. MIPs against amphetamine and its intermediates were synthesized using several techniques, such as electro-, thermo- and UV-initiated polymerization. Different monomers, cross-linkers, and initiators, in various ratios, were tested to obtain the best sensitivity and polymer properties. Subsequently, specificity and selectivity were compared with commercially available MIPs against amphetamine. Different linkers, such as lipoic acid, 3-mercaptopropionic acid, and tyramine, were examined in combination with several immobilization techniques to select the best procedure for attaching the particles to the sensor surface. The experiments performed allowed an optimal method to be chosen for the intended sensor application. The stability of the MIPs under extreme conditions, such as highly acidic or basic media, was determined. The results obtained support the applicability of a MIP-based sensor for sewage system testing.

Keywords: amphetamine type stimulants, environment, molecular imprinted polymers, MIPs, sensor

Procedia PDF Downloads 240
8108 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm

Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz

Abstract:

Controlling wind-induced vibrations, as well as aerodynamic forces, is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation of the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, were searched according to the optimization criterion of minimizing the root mean square (RMS) displacement response at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum was reproduced using eleven harmonic functions with randomly chosen phase angles. This methodology was applied to a case study derived from a 37-story, 144 m tall prestressed concrete building in which the wind action governs over the seismic action. The results showed that the optimally tuned TID reduces the RMS displacement response by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
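The harmonic-superposition step described above can be sketched as follows. The friction velocity, mean wind speed, and frequency grid are illustrative assumptions, not the case-study values:

```python
import math
import random

def davenport_psd(f, u_star=1.5, u10=30.0):
    """One-sided Davenport along-wind velocity spectrum, m^2/s (illustrative parameters)."""
    x = 1200.0 * f / u10
    return 4.0 * u_star ** 2 * x ** 2 / (f * (1.0 + x ** 2) ** (4.0 / 3.0))

rng = random.Random(1)
df = 0.02                                     # frequency step, Hz (assumed)
freqs = [df * (k + 1) for k in range(11)]     # eleven harmonics, as in the paper
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]   # random phase angles
amps = [math.sqrt(2.0 * davenport_psd(f) * df) for f in freqs]

def gust(t):
    """Fluctuating wind speed about the mean at time t, in m/s."""
    return sum(a * math.cos(2.0 * math.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))

series = [gust(0.1 * n) for n in range(1000)]   # 100 s sampled at 10 Hz
```

Each harmonic amplitude carries the spectral power of its frequency band (a_k = sqrt(2*S(f_k)*df)), so the synthesized series reproduces the target spectrum's variance distribution.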

Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations

Procedia PDF Downloads 124
8107 The Impact of Green Building Envelopes on the Urban Microclimate of the Urban Canopy-Case Study: Fawzy Moaz Street, Alexandria, Egypt

Authors: Amany Haridy, Ahmed Elseragy, Fahd Omar

Abstract:

The issue of temperature increase in the urban microclimate has been at the center of attention recently, especially in dense urban areas such as Alexandria, Egypt, where building surfaces have become the dominant element (exceeding green areas and streets). Temperatures have been rising during daytime as well as nighttime; however, this research focused on the rise of air temperature at night, a phenomenon known as the urban heat island, which has many effects on ecological life as well as human health. This study provides evidence of the possibility of reducing the urban heat island by using a green building envelope (green wall and green roof) in Alexandria, a city that has witnessed a boom in the growth of its urban fabric and population. A simulation analysis using the ENVI-met software was performed to quantify the air temperature reduction. The simulation depended on the orientation of the green areas and their density, which was defined through a climatic analysis carried out with the DIVA plugin for Grasshopper. Results showed that the reduction in air temperature varies from 0.8–2.0 °C, increasing with the density of green areas. Many green wall and green roof systems can be found in the local market; however, treating an existing building requires a careful choice of system to fit the building construction load and the surrounding nature. Among the systems of choice were the ‘geometric system’ of vertical greening, which can be fixed on a light aluminum structure for walls, and the extensive green system for roofs. Finally, native plants were the best choice in the long term because they fare well in the local climate.

Keywords: envi-met, green building envelope, urban heat island, urban microclimate

Procedia PDF Downloads 189
8106 A Framework for Teaching the Intracranial Pressure Measurement through an Experimental Model

Authors: Christina Klippel, Lucia Pezzi, Silvio Neto, Rafael Bertani, Priscila Mendes, Flavio Machado, Aline Szeliga, Maria Cosendey, Adilson Mariz, Raquel Santos, Lys Bendett, Pedro Velasco, Thalita Rolleigh, Bruna Bellote, Daria Coelho, Bruna Martins, Julia Almeida, Juliana Cerqueira

Abstract:

This project presents a framework for teaching intracranial pressure (ICP) monitoring concepts using a low-cost experimental model in a neurointensive care education program. Data from ICP monitoring contribute to the patient's clinical assessment, may dictate the course of action of the health team (nursing and medical staff), and influence decisions about the appropriate intervention. This study aims to present a safe method for teaching ICP monitoring to medical students in a simulation center. Methodology: Medical school teachers, together with 4th-year students, built an experimental model for teaching ICP measurement. The model consists of a mannequin's head with a plastic bag inside simulating the cerebral ventricle and an inserted ventricular catheter connected to the ICP monitoring system. The bag simulating the ventricle can also be exchanged for others containing simulated bloody or infected cerebrospinal fluid. On the mannequin's ear there is a blue point indicating the right place to set the "zero point" for an accurate pressure reading. The educational program includes four steps: 1st, students receive a script on ICP measurement to read before training; 2nd, students watch a video on the subject, created in the simulation center, demonstrating each step of ICP monitoring and the proper care, such as correct positioning of the patient, the anatomical structures used to establish the zero point for ICP measurement, and the safe range of ICP; 3rd, students practice the procedure on the model, with teachers helping them during training; 4th, students are assessed with a checklist form, with feedback and correction of wrong actions. Results: Students expressed interest in learning ICP monitoring. Tests concerning the success rate are still being performed; the final results and the video will be shown at the event.
Conclusion: The study of intracranial pressure measurement based on an experimental model is an effective and controlled method of learning and research, well suited to teaching neurointensive care practices. Assessment based on a checklist form helps teachers keep track of students' learning progress. This project offers medical students a safe method to develop intensive neurological monitoring skills for the clinical assessment of patients with neurological disorders.
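For context, the quantity usually taught alongside ICP readings is the cerebral perfusion pressure, CPP = MAP - ICP. The sketch below uses commonly cited adult reference thresholds; exact cut-offs vary by source and patient and are not taken from this project:

```python
def cpp(map_mmhg, icp_mmhg):
    """Cerebral perfusion pressure: CPP = MAP - ICP (all values in mmHg)."""
    return map_mmhg - icp_mmhg

def classify_icp(icp_mmhg):
    """Coarse adult classification; thresholds are commonly cited, not universal."""
    if icp_mmhg > 20.0:
        return "elevated"
    if icp_mmhg >= 5.0:
        return "normal"
    return "low"

reading = cpp(map_mmhg=90.0, icp_mmhg=12.0)   # 78 mmHg
```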

Keywords: neurology, intracranial pressure, medical education, simulation

Procedia PDF Downloads 154
8105 Design and Simulation of a Low Phase Noise CMOS LC VCO for IEEE 802.11a WLAN Applications

Authors: Hooman Kaabi, Raziyeh Karkoub

Abstract:

This work proposes an AMOS-varactor structure for a 5 GHz LC-VCO designed in TSMC 0.18 μm CMOS to improve phase noise and tuning range performance. The tuning range is from 5.05 GHz to 5.88 GHz, and the phase noise is -154.9 dBc/Hz at 1 MHz offset from the carrier. The design meets the requirements of the IEEE 802.11a WLAN standard.
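The tuning range reported above follows from the LC tank relation f = 1/(2π√(LC)). The inductance value below is an assumed placeholder, not the paper's spiral inductor, but it shows the capacitance swing the varactor must cover:

```python
import math

def lc_freq(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def cap_for(L, f):
    """Tank capacitance needed to resonate at frequency f with inductance L."""
    return 1.0 / (L * (2.0 * math.pi * f) ** 2)

L_TANK = 1.0e-9                       # assumed 1 nH tank inductance (illustrative)
c_max = cap_for(L_TANK, 5.05e9)       # capacitance at the band's low edge
c_min = cap_for(L_TANK, 5.88e9)       # capacitance at the band's high edge
tuning_ratio = c_max / c_min          # Cmax/Cmin the AMOS varactor must provide
```

Since f scales as 1/√C, the required capacitance ratio is (5.88/5.05)² ≈ 1.36 regardless of the inductance chosen.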

Keywords: CMOS LC VCO, spiral inductor, varactor, phase noise, tuning range

Procedia PDF Downloads 522
8104 A Case Study Approach to Co-Constructing the Idea of ‘Safety’ with Children

Authors: Beng Zhen Yeow

Abstract:

In most work that involves children, the voice of the children is often not heard. This is ironic, since many of the discussions involve their welfare and safety. It might seem natural for professionals to hear from them about what they wish for, instead of deciding what is best for them. Unfortunately, however, this is more the exception than the norm, and in many instances children are merely 'subjects' in conversations about safety instead of active participants in the construction or creation of safety in the family. There might be many reasons why this does not happen in our work. Firstly, professionals have learnt to 'socialise' into their professional roles and hence, in the process, become 'un-childlike'. Secondly, there is a lack of professional training in how to talk with children. Finally, there might also be a lack of concrete tools and techniques developed to facilitate the process. In this paper, the case study method is used to show how the idea of safety could be concretised and discussed with children and their family members, making them active participants in and co-creators of their own safety. Specific skills and techniques are highlighted through the case study. In this case, outcomes improved: there was no repeated offence or abuse, children were able to advocate for their own safety after six months of intervention, and family members were able to say explicitly what they could do to improve safety. The professionals in the safety network reported significant improvements. In addition, the abused child, who had been removed due to child protection concerns, verbalized observations of change in her mother's parenting abilities and requested that home leave begin, reflecting her ownership of the safety planning and her confidence in co-creating safety for her siblings and herself together with the professionals in the safety network.
Children becoming active participants in the co-creation of safety not only allows them to own a 'voice' but also gives them greater confidence to protect themselves at home and in other contexts outside the home.

Keywords: partnering for safety, collaborative social work, family and systemic psychotherapy, child protection

Procedia PDF Downloads 111
8103 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition finds many applications in surveillance and human-computer interaction systems. As these applications are of much importance and demand high accuracy, face recognition systems are expected to be more robust while requiring less computation time. In this paper, a hybrid approach for face recognition combining Gabor wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class scatter and maximize the inter-class scatter. The variants used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
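As an illustration of the Gabor-feature and nearest-neighbour stages (LDA is omitted here, so this is not HGWLDA itself), the following toy sketch builds a small Gabor filter bank and classifies stripe patterns standing in for face images:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
            * np.cos(2.0 * np.pi * xr / wavelength))

def convolve_valid(img, kern):
    """Direct valid-mode 2-D correlation (slow but dependency-free)."""
    kh, kw = kern.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def gabor_features(img, wavelengths=(4.0, 8.0), n_orient=4):
    """Mean absolute filter response per (wavelength, orientation) pair."""
    feats = []
    for wl in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(9, wl, k * np.pi / n_orient, sigma=wl / 2.0)
            feats.append(np.mean(np.abs(convolve_valid(img, kern))))
    return np.array(feats)

def knn_1(train, test_feat):
    """1-nearest neighbour over Euclidean distance in feature space."""
    return min(train, key=lambda item: np.linalg.norm(item[1] - test_feat))[0]

# toy stand-ins for face classes: vertical- vs horizontal-stripe patterns
j = np.arange(16)
vert = np.tile(np.cos(2.0 * np.pi * j / 4.0), (16, 1))
horz = vert.T
train = [("vertical", gabor_features(vert)), ("horizontal", gabor_features(horz))]

rng = np.random.default_rng(0)
noisy = vert + 0.05 * rng.standard_normal(vert.shape)
pred = knn_1(train, gabor_features(noisy))
```

The oriented carriers make the two stripe classes separable in feature space, which is the same property the full pipeline exploits before LDA compresses the features.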

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 459
8102 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based on Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to these ownership problems: digital image watermarking hides watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks while maintaining data quality. This paper discusses two approaches to image watermarking, the first based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with each approach to transform the cover image for the embedding process. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach obtained PSNR equal to 50.5 and MSE equal to 0.0048, using a population size of 100, 150 iterations, and 3×3 blocks. According to the results, small block sizes can affect the quality of PSO/GA-based image watermarking because they increase the search area of the watermarking image. Better PSO results were obtained with a swarm size of 100.
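For reference, a plain additive DWT-domain embedding, without the PSO/GA coefficient selection of the paper, makes the MSE/PSNR bookkeeping concrete. The Haar wavelet and the embedding strength below are illustrative choices:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar transform; returns (LL, HL, LH, HH) bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0          # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0          # row details
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,     # LL
            (a[:, 0::2] - a[:, 1::2]) / 2.0,     # HL
            (d[:, 0::2] + d[:, 1::2]) / 2.0,     # LH
            (d[:, 0::2] - d[:, 1::2]) / 2.0)     # HH

def ihaar2d(LL, HL, LH, HH):
    """Invert haar2d exactly."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

rng = np.random.default_rng(0)
cover = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in for a cover image
LL, HL, LH, HH = haar2d(cover)
wm = rng.choice([-1.0, 1.0], size=LL.shape)      # +/-1 watermark bits
alpha = 2.0                                      # embedding strength (assumed)
marked = ihaar2d(LL + alpha * wm, HL, LH, HH)

mse = float(np.mean((marked - cover) ** 2))
psnr = 10.0 * np.log10(255.0 ** 2 / mse)         # PSNR in dB for 8-bit range
recovered = np.sign(haar2d(marked)[0] - LL)      # non-blind extraction
```

With this normalization each LL coefficient feeds its 2×2 pixel block directly, so the MSE equals alpha² exactly and PSNR follows as 10·log10(255²/MSE); PSO/GA-based schemes instead search for the embedding locations that maximize such quality metrics under robustness constraints.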

Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform

Procedia PDF Downloads 214
8101 Pre-Implementation of Total Body Irradiation Using Volumetric Modulated Arc Therapy: Full Body Anthropomorphic Phantom Development

Authors: Susana Gonçalves, Joana Lencart, Anabela Gregório Dias

Abstract:

Introduction: In combination with chemotherapy, Total Body Irradiation (TBI) is most often used as part of the conditioning regimen prior to allogeneic hematopoietic stem cell transplantation. Conventional TBI techniques have long application times and a non-conformal beam application that cannot individually spare organs at risk. Our institution intends to start using Volumetric Modulated Arc Therapy (VMAT) techniques to increase the homogeneity of the delivered radiation. As a first approach, a dosimetric plan was performed on a computed tomography (CT) scan of a Rando Alderson anthropomorphic phantom (head and torso), using a set of six arcs distributed along the phantom. However, a full-body anthropomorphic phantom is essential to carry out technique validation and implementation. Our aim is to define the physical and chemical characteristics and the ideal manufacturing procedure for the upper and lower limbs of our anthropomorphic phantom, in order to later validate TBI using VMAT. Materials and Methods: To determine the best fit between the phantom and the limbs, a CT scan of the Rando Alderson anthropomorphic phantom was acquired on GE Healthcare equipment (model Optima CT580 W) with a slice thickness of 2.5 mm. This CT was also used to assess the electronic density of soft tissue and bone through Hounsfield unit (HU) analysis. Results: The CT images were analyzed and measurements were made for the ideal upper and lower limbs. The upper limbs should be built to the following measures: 43 cm length and 7 cm diameter (next to the shoulder section). The lower limbs should be built to the following measures: 79 cm length and 16.5 cm diameter (next to the thigh section). As expected, soft tissue and bone have very different electronic densities, which is important when choosing and analyzing materials to represent soft-tissue and bone characteristics. The approximate HU values for soft tissue and bone should be 35 HU and 250 HU, respectively.
Conclusion: At the moment, several compounds based on different types of resins and additives are being developed in order to control and mimic the densities of the various tissue constituents. Concurrently, several manufacturing techniques are being explored to make it possible to produce the upper and lower limbs in a simple and inexpensive way, so that a systematic and appropriate study of total body irradiation can finally be carried out. This preliminary study was a good starting point to demonstrate the feasibility of TBI with VMAT.
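The HU targets above translate into linear attenuation coefficients for candidate materials via the CT-number definition HU = 1000 * (mu - mu_water) / mu_water. The water attenuation value below is an assumed round figure for illustration, not a measured beam-quality-specific value:

```python
def hounsfield(mu, mu_water):
    """CT number definition: HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

def mu_from_hu(hu, mu_water):
    """Invert the definition to get a target attenuation coefficient."""
    return mu_water * (1.0 + hu / 1000.0)

MU_WATER = 0.19   # assumed linear attenuation of water, cm^-1 (illustrative)

mu_soft = mu_from_hu(35.0, MU_WATER)    # target for the soft-tissue substitute
mu_bone = mu_from_hu(250.0, MU_WATER)   # target for the bone substitute
```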

Keywords: TBI, VMAT, anthropomorphic phantom, tissue equivalent materials

Procedia PDF Downloads 65
8100 Passive Vibration Isolation Analysis and Optimization for Mechanical Systems

Authors: Ozan Yavuz Baytemir, Ender Cigeroglu, Gokhan Osman Ozgen

Abstract:

Vibration is an important issue in the design of various components of aerospace, marine, and vehicular applications. To preserve component function and operational performance, vibration isolation design, involving the selection of optimum isolator properties and isolator positioning, is a critical task. Given the growing need for vibration isolation system design, this paper presents two software tools capable of performing modal analysis, response analysis for both random and harmonic excitations, static deflection analysis, and Monte Carlo simulations, in addition to parameter and location optimization for different types of isolation problem scenarios. A review of the literature reveals no study developing a software-based tool capable of performing all of these analysis, simulation, and optimization studies in one platform simultaneously. In this paper, the theoretical system model is generated for a 6-DOF rigid body. The vibration isolation system of any mechanical structure can be optimized using a hybrid method involving both global search and gradient-based methods. The optimization design variables are defined, and different types of optimization scenarios are listed in detail. Aware of the need for a user-friendly vibration isolation problem solver, two graphical user interfaces (GUIs) were prepared and verified using a commercial finite element analysis program, ANSYS Workbench 14.0. Using the analysis and optimization capabilities of these GUIs, a real application used in an air platform is also presented as a case study at the end of the paper.
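As a single-DOF illustration of the isolator-property selection the tools automate (the paper's model is a 6-DOF rigid body; all numbers below are invented for the sketch), transmissibility and a static-deflection constraint already define a small optimization:

```python
import math

G = 9.81  # m/s^2

def transmissibility(r, zeta):
    """SDOF isolator transmissibility; r = excitation freq / natural freq."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

def static_deflection(fn_hz):
    """Static sag (m) of a linear mount with natural frequency fn_hz."""
    return G / (2.0 * math.pi * fn_hz) ** 2

f_exc, zeta, max_sag = 50.0, 0.05, 0.005   # excitation Hz, damping, sag limit m
candidates = [3.0, 5.0, 7.0, 8.0, 10.0, 12.0, 15.0]   # candidate mount fn, Hz
feasible = [fn for fn in candidates if static_deflection(fn) <= max_sag]
best_fn = min(feasible, key=lambda fn: transmissibility(f_exc / fn, zeta))
```

Isolation only begins above r = sqrt(2) (where transmissibility crosses 1 for any damping), so the search favours the softest mount that still respects the deflection limit; the paper's hybrid global/gradient optimizer plays the same role for the coupled 6-DOF case.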

Keywords: hybrid optimization, Monte Carlo simulation, multi-degree-of-freedom system, parameter optimization, location optimization, passive vibration isolation analysis

Procedia PDF Downloads 551
8099 The Application of Lesson Study Model in Writing Review Text in Junior High School

Authors: Sulastriningsih Djumingin

Abstract:

This study has three objectives. First, it describes the ability of second-grade students at SMPN 18 Makassar to write review text without applying the Lesson Study model. Second, it describes their ability to write review text when the Lesson Study model is applied. Third, it tests the effectiveness of the Lesson Study model for writing review text at SMPN 18 Makassar. The research used a true experimental, posttest-only group design involving two groups: one control class and one experimental class. The population was all 250 second-grade students at SMPN 18 Makassar, distributed across 8 classes, and the sampling technique was purposive sampling. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, consisting of 30 students. The research instruments were observation and tests. The collected data were analyzed using descriptive and inferential statistical techniques, with t-tests processed using SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) scored above 7.5, categorized as inadequate; (2) in the experimental class, 26 students (87%) reached a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective when applied to writing review text. The comparison of the control and experimental classes indicates that the t-count value is greater than the t-table value (2.411 > 1.667), so the alternative hypothesis (H1) proposed by the researcher is accepted.
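The t-count comparison reported above can be reproduced in miniature as follows; the score samples are invented stand-ins, not the study's data:

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Independent two-sample t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    sp2 = ((na - 1) * variance(sample_a) +
           (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / (sp2 * (1.0 / na + 1.0 / nb)) ** 0.5

# illustrative posttest scores, not the study's data
experimental = [8, 9, 7, 8, 9, 8]
control = [6, 7, 6, 7, 6, 7]
t_count = pooled_t(experimental, control)
```

The decision rule is the one used in the abstract: reject the null hypothesis when the computed t-count exceeds the critical t-table value for the design's degrees of freedom.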

Keywords: application, lesson study, review text, writing

Procedia PDF Downloads 188
8098 Effect of Modification and Expansion on Emergence of Cooperation in Demographic Multi-Level Donor-Recipient Game

Authors: Tsuneyuki Namekata, Yoko Namekata

Abstract:

It is known that the mean investment evolves from a very low initial value to some high level in the Continuous Prisoner's Dilemma. We examine how the cooperation level evolves from a low initial level to a high level in our Demographic Multi-level Donor-Recipient situation. In the Multi-level Donor-Recipient game, one player is randomly selected as a Donor and the other as a Recipient. The Donor has multiple cooperative moves and one defective move. A cooperative move means the Donor pays some cost for the Recipient to receive some benefit; the more cooperative the move the Donor takes, the higher the cost the Donor pays and the higher the benefit the Recipient receives. The defective move has no effect on either player. Two consecutive Multi-level Donor-Recipient games, one as a Donor and the other as a Recipient, can be viewed as a discrete version of the Continuous Prisoner's Dilemma. In the Demographic Multi-level Donor-Recipient game, players are initially distributed spatially. In each period, players play multiple Multi-level Donor-Recipient games against other players. A player leaves offspring if possible and dies when his accumulated payoff becomes negative or when his lifespan ends. Cooperative moves are necessary for the survival of the whole population. Initially, only a low-level cooperative move, besides the defective move, is available in the players' strategies. A player may modify and expand his strategy based on his recent experiences or practices. We distinguish several types of players with respect to modification and expansion. We show, by agent-based simulation, that introducing only the modification increases the emergence rate of cooperation, that introducing both the modification and the expansion further increases it, and that a high level of cooperation does emerge in our Demographic Multi-level Donor-Recipient game.
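One round of the Multi-level Donor-Recipient game described above can be sketched as follows; the cost and benefit levels are hypothetical illustrations, not the authors' parameters:

```python
# Minimal sketch of one round of a multi-level Donor-Recipient game.
# The cost/benefit levels are hypothetical, not the authors' actual
# parameters. Move 0 is defection (no cost, no benefit); higher moves
# are more cooperative: the Donor pays a larger cost and the Recipient
# gains a larger benefit.
import random

COST    = [0.0, 0.2, 0.4, 0.6]
BENEFIT = [0.0, 0.6, 1.2, 1.8]

def play_round(payoffs, donor, recipient, move):
    """Donor pays COST[move]; Recipient receives BENEFIT[move]."""
    payoffs[donor] -= COST[move]
    payoffs[recipient] += BENEFIT[move]

random.seed(1)
payoffs = [0.0, 0.0]                          # accumulated payoffs
donor, recipient = random.sample([0, 1], 2)   # roles chosen at random
play_round(payoffs, donor, recipient, move=1) # low-level cooperation
print(payoffs)
```

In a full agent-based simulation, accumulated payoffs would then drive reproduction and death, as in the abstract.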

Keywords: agent-based simulation, donor-recipient game, emergence of cooperation, spatial structure, TFT, TF2T

Procedia PDF Downloads 354
8097 Mitigation Strategies in the Urban Context of Sydney, Australia

Authors: Hamed Reza Heshmat Mohajer, Lan Ding, Mattheos Santamouris

Abstract:

One of the worst environmental hazards for people who live in cities is the Urban Heat Island (UHI) effect, which is anticipated to become stronger in the coming years as a result of climate change. Accordingly, the key aim of this paper is to study the interaction between urban configuration and mitigation strategies, including increasing the albedo of the urban environment (reflective materials), implementing Urban Green Infrastructure (UGI), and/or a combination thereof. To analyse the microclimate models of different urban categories in the metropolis of Sydney, this study assesses meteorological parameters using ENVI-met, a 3D computational fluid dynamics (CFD) simulation tool. Four main parameters are taken into consideration while assessing the effectiveness of the UHI mitigation strategies: ambient air temperature, wind speed, wind direction, and outdoor thermal comfort. Simulations of the layouts under present conditions, based on the basic model (scenario one), are taken as the benchmark, and the relative percentage variations of each scenario are calculated against this base model. The findings show that the maximum ambient air temperature across the different urban layouts can be decreased by up to 2.15 °C by combining high-albedo materials with vegetation. In addition, layouts with open arrangements (OT1) present a remarkable improvement in ambient air temperature and outdoor thermal comfort when mitigation technologies are applied, compared to their compact counterparts. All layouts also show a stronger reduction in the maximum ambient air temperature than in the minimum ambient air temperature. On the other hand, scenarios associated with an increase in greenery are anticipated to have only a slight cooling effect, especially in high-rise layouts.

Keywords: sustainable urban development, urban green infrastructure, high-albedo materials, heat island effect

Procedia PDF Downloads 77
8096 A Single Loop Repetitive Controller for a Four Legs Matrix Converter Unit

Authors: Wesam Rohouma

Abstract:

The aim of this paper is to investigate the use of a repetitive controller to regulate the output voltage of a three-phase four-leg matrix converter for an aircraft ground power supply unit. The proposed controller improves the steady-state error and provides good regulation under different loading conditions. Simulation results for a 7.5 kW converter are presented to verify the operation of the proposed controller.
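The idea of a repetitive controller, which adds the control action and a scaled error from one fundamental period earlier so as to cancel a periodic error, can be sketched as follows; the period length N and gain are illustrative values, not the paper's design:

```python
# Minimal sketch of a discrete plug-in repetitive controller:
# u[k] = u[k - N] + Kr * e[k - N], where N is the number of samples
# per fundamental period. N and K_R are illustrative values, not the
# paper's actual design.
N = 100    # samples per fundamental period (hypothetical)
K_R = 0.5  # repetitive learning gain (hypothetical)

class RepetitiveController:
    def __init__(self, period_samples, gain):
        self.n = period_samples
        self.k_r = gain
        self.u_hist = [0.0] * period_samples  # stores u[k - N]
        self.e_hist = [0.0] * period_samples  # stores e[k - N]
        self.k = 0

    def step(self, error):
        i = self.k % self.n
        u = self.u_hist[i] + self.k_r * self.e_hist[i]  # u[k] = u[k-N] + Kr*e[k-N]
        self.u_hist[i] = u
        self.e_hist[i] = error
        self.k += 1
        return u

ctrl = RepetitiveController(N, K_R)
# With a persistent periodic error, the control action builds up period
# by period: 0 in the first period, then Kr*e, then 2*Kr*e, ...
outputs = [ctrl.step(1.0) for _ in range(3 * N)]
```

The period-by-period build-up is what lets a repetitive controller drive the steady-state error of a periodic reference toward zero.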

Keywords: matrix converter, power electronics, controller, regulation

Procedia PDF Downloads 1496
8095 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence from the body of biomedical literature. An important challenge confronted by medical practitioners is the long time needed to browse, filter, summarize, and compile information from different medical resources. Deep learning can help to solve this through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison, and Outcome). This article describes the use of deep learning techniques for Q&A, based on transformer models like BERT and GPT, to answer PICO clinical questions for evidence-based practice, with evidence extracted from sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach an acceptable state-of-the-art performance based on a two-stage bootstrapping process: filtering relevant articles, followed by identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments that empower our bootstrapping techniques with patch attention to the most important keywords in the clinical case and the PICO question. Our bootstrapping patched with attention shows the relevancy of the collected evidence based on entropy metrics.
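As a rough illustration of scoring retrieved evidence against a PICO question with an entropy metric (a toy keyword-overlap stand-in, not the authors' transformer pipeline), consider:

```python
# Toy sketch: score a retrieved abstract against a PICO question by
# keyword overlap, with a Shannon-entropy check on the matched-term
# distribution. This illustrates the entropy idea only; it is not the
# authors' actual metric or pipeline, and the texts are synthetic.
from collections import Counter
from math import log2

def pico_terms(pico):
    """Flatten a PICO dict into a set of lowercase query terms."""
    return {w.lower() for field in pico.values() for w in field.split()}

def entropy(counts):
    """Shannon entropy of a distribution of matched-term counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

pico = {"P": "adults with hypertension", "I": "low sodium diet",
        "C": "usual diet", "O": "blood pressure reduction"}

abstract = ("low sodium diet reduced blood pressure in adults "
            "with hypertension compared with usual diet")

matches = Counter(w for w in abstract.split() if w in pico_terms(pico))
print(sum(matches.values()), round(entropy(matches.values()), 2))
```

A flatter matched-term distribution (higher entropy) suggests the abstract touches all PICO components rather than repeating one keyword.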

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 23
8094 Overview of Pre-Analytical Lab Errors in a Tertiary Care Hospital at Rawalpindi, Pakistan

Authors: S. Saeed, T. Butt, M. Rehan, S. Khaliq

Abstract:

Objective: To determine the frequency of pre-analytical errors in samples taken from patients for various lab tests at Fauji Foundation Hospital, Rawalpindi. Material and Methods: All the lab specimens received for diagnostic purposes from the indoor and outdoor patients of Fauji Foundation Hospital, Rawalpindi were included. The total number of samples received in the lab was recorded in the computerized program made for the hospital. All the errors observed in the pre-analytical process, including patient identification, sampling techniques, test collection procedures, and specimen transport/processing and storage, were recorded in the log book kept for the purpose. Results: A total of 476616 specimens were received in the lab during the period of study, including 237931 from outdoor and 238685 from indoor patients. Forty-one percent of the samples (n=197976) revealed pre-analytical discrepancies. The discrepancies included hemolyzed samples (34.8%), clotted blood (27.8%), incorrect samples (17.4%), unlabeled samples (8.9%), insufficient specimens (3.9%), request forms without an authorized signature (2.9%), empty containers (3.9%), and tube breakage during centrifugation (0.8%). Most of these pre-analytical discrepancies were observed in samples received from the wards, suggesting inappropriate sample collection by the medical staff of the wards, as most of the outdoor samples are collected by the lab staff, who are properly trained in sample collection. Conclusion: It is mandatory to educate phlebotomists and paramedical staff, particularly those performing duties in the wards, regarding the timing and techniques of sampling, the appropriate container to use, and the early delivery of samples to the lab in order to reduce pre-analytical errors.

Keywords: pre analytical lab errors, tertiary care hospital, hemolyzed, paramedical staff

Procedia PDF Downloads 197
8093 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of electric current, water molecules can be split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittence: energy produced when production exceeds demand is stored and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools for saving time, money, and effort during the optimization of operating conditions and the investigation of the design. Modeling and simulation become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be utilized to understand the mechanisms, control the systems, and detect flaws in them. Several modeling methods have been proposed. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents the system with ordinary or partial differential equations. The latter approach is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, hence being suitable for multiphysics systems. Among the graphical modeling methods, the bond graph is receiving increasing attention for being domain-independent and for relying on the energy exchange between the components of the system.
At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and mathematical approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) were behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both the HER and the OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various operating temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. Simulating the polarization curve allows designs to be tested virtually, hence reducing the cost and time involved in experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
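The voltage contributions discussed above (reversible voltage, HER/OER activation overpotentials, ohmic loss) can be sketched as a simple polarization-curve model; all parameter values are illustrative assumptions, not fitted to the study's data:

```python
# Hedged sketch of an electrolyzer polarization curve: cell voltage as
# the reversible voltage plus Tafel-type activation overpotentials for
# the HER and OER plus an ohmic loss from membrane resistance. All
# parameter values are illustrative, not the study's fitted values.
from math import log

E_REV = 1.23        # reversible voltage, V (standard conditions)
TAFEL_HER = 0.05    # Tafel slope for the HER, V (hypothetical)
TAFEL_OER = 0.07    # Tafel slope for the OER, V (hypothetical)
J0_HER = 1e-4       # HER exchange current density, A/cm^2 (hypothetical)
J0_OER = 1e-6       # OER exchange current density, A/cm^2 (hypothetical)
R_OHM = 0.15        # area-specific membrane resistance, ohm*cm^2

def cell_voltage(j):
    """Cell voltage (V) at current density j (A/cm^2)."""
    eta_her = TAFEL_HER * log(j / J0_HER)   # HER activation overpotential
    eta_oer = TAFEL_OER * log(j / J0_OER)   # OER activation overpotential
    return E_REV + eta_her + eta_oer + R_OHM * j

# Polarization curve: voltage rises monotonically with current density
curve = [(j / 10, cell_voltage(j / 10)) for j in range(1, 11)]
```

Such a simple model reproduces the qualitative shape of a polarization curve and makes explicit which terms the catalysts and the membrane each control.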

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 73
8092 Numerical Simulation of Air Pollutant Using Coupled AERMOD-WRF Modeling System over Visakhapatnam: A Case Study

Authors: Amit Kumar

Abstract:

Accurate identification of deteriorated air quality regions is very helpful in devising better environmental practices and mitigation efforts. In the present study, an attempt has been made to identify air pollutant dispersion patterns, especially of NOX due to vehicular and industrial sources, over a rapidly developing urban city, Visakhapatnam (17°42’ N, 83°20’ E), India, during April 2009. Using the emission factors of the different vehicles as well as the industry, a high-resolution 1 km x 1 km gridded emission inventory has been developed for Visakhapatnam city. A dispersion model, AERMOD, with an explicit representation of planetary boundary layer (PBL) dynamics, offline-coupled through a developed coupler mechanism with the high-resolution mesoscale model WRF-ARW, is used in this work to simulate the dispersion patterns of NOX. The meteorological and PBL parameters obtained by employing two PBL schemes of the WRF-ARW model, namely the non-local Yonsei University (YSU) scheme and the local Mellor-Yamada-Janjic (MYJ) scheme, which reasonably represent the boundary layer parameters, are considered for integrating AERMOD. Significantly different dispersion patterns of NOX have been noticed between the summer and winter months. The simulated NOX concentrations are validated against the six available monitoring stations of the Central Pollution Control Board, India. Statistical analysis of the model-evaluated concentrations against the observations reveals that WRF-ARW with the YSU scheme coupled with AERMOD shows better performance. The deteriorated air quality locations over Visakhapatnam are identified based on the validated model simulations of NOX concentrations. The present study advocates the utility of the developed gridded emission inventory of NOX together with the coupled WRF-AERMOD modeling system for air quality assessment over the study region.
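The statistical evaluation of simulated against observed concentrations can be sketched with common model-evaluation metrics (mean bias, RMSE, and Willmott's index of agreement); the concentration values below are synthetic, not the study's monitoring data:

```python
# Hedged sketch of the kind of statistical model evaluation used to
# compare simulated and observed NOx concentrations: mean bias, RMSE,
# and Willmott's index of agreement. The data are synthetic, not the
# study's monitoring records.
from math import sqrt

def mean_bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    return sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def index_of_agreement(model, obs):
    """Willmott's d: 1 is perfect agreement, 0 is none."""
    ob = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - ob) + abs(o - ob)) ** 2 for m, o in zip(model, obs))
    return 1 - num / den

observed  = [22.0, 30.0, 18.0, 40.0, 35.0, 27.0]  # synthetic NOx, ug/m3
simulated = [20.0, 33.0, 15.0, 42.0, 31.0, 25.0]

print(mean_bias(simulated, observed))  # -1.0: slight underprediction
```

Scheme comparisons such as YSU versus MYJ are typically ranked by exactly these kinds of aggregate statistics over all monitoring stations.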

Keywords: WRF-ARW, AERMOD, planetary boundary layer, air quality

Procedia PDF Downloads 264
8091 Preparation of Conductive Composite Fiber by the Reduction of Silver Particles onto Hydrolyzed Polyacrylonitrile Fiber

Authors: Z. Okay, M. Kalkan Erdoğan, M. Şahin, M. Saçak

Abstract:

Polyacrylonitrile (PAN) is one of the most common and cheapest fiber-forming polymers because of its high strength and high abrasion resistance. Alkaline hydrolysis of PAN fiber can yield products with conjugated sequences of –C=N–, as well as acrylamide, sodium acrylate, and amidine groups. In this study, PAN fiber was hydrolyzed in a sodium hydroxide solution, and this hydrolyzed PAN (HPAN) fiber was used to prepare a conductive composite fiber with silver particles. Electrically conductive PAN fiber has the potential to be used in a variety of materials such as antistatic materials, life jackets, and static-charge-reducing products. We monitored the change in the weight loss of the PAN fiber with hydrolysis time. A 60% weight loss was observed after 7 h of hydrolysis under the investigated conditions, but the fiber lost its fibrous structure. A hydrolysis time of 5 h was found to be suitable in terms of preserving the fibrous structure. The change in the conductivity of the composite with preparation conditions, such as hydrolysis time and silver ion concentration, was studied. PAN fibers with different degrees of hydrolysis were treated with aqueous solutions containing different concentrations of silver ions by continuous stirring at 20 °C for 30 min, and a composite with a maximum conductivity of 2 S/cm could be prepared. The antibacterial properties of the conductive silver-containing HPAN fibers were also investigated. While the hydrolysis of the PAN fiber was characterized by FTIR and SEM techniques, the silver reduction process of the HPAN fiber was investigated with SEM and TGA-DTA techniques. The SEM micrographs showed that the surface of the HPAN fiber was rougher and much more corroded than that of the PAN fiber.

Keywords: composite, conducting polymer, fiber, polyacrylonitrile

Procedia PDF Downloads 460
8090 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Using the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
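The deconvolution idea underlying such models can be sketched in one dimension with the classical van Cittert iteration, which approximates the inverse of a smoothing filter G by applying u* = sum over k of (I - G)^k to the filtered field; the filter and signal here are illustrative only, not the paper's setup:

```python
# Hedged 1D sketch of approximate deconvolution (van Cittert iteration),
# the idea underlying deconvolution-type SFS models: approximate the
# inverse of a smoothing filter G by u* = sum_{k=0..p} (I - G)^k applied
# to the filtered field. Filter and signal are illustrative only.
from math import sin, pi

def box_filter(u):
    """Simple periodic three-point smoothing filter (a stand-in for G)."""
    n = len(u)
    return [0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[(i + 1) % n]
            for i in range(n)]

def van_cittert(u_filtered, order):
    """Approximately deconvolve the filtered field."""
    u_star = list(u_filtered)
    residual = list(u_filtered)
    for _ in range(order):
        # residual <- (I - G) residual ; u* <- u* + residual
        g = box_filter(residual)
        residual = [r - gi for r, gi in zip(residual, g)]
        u_star = [s + r for s, r in zip(u_star, residual)]
    return u_star

n = 64
u = [sin(2 * pi * i / n) + 0.3 * sin(8 * pi * i / n) for i in range(n)]
u_f = box_filter(u)
u_rec = van_cittert(u_f, order=5)

err_filtered = max(abs(a - b) for a, b in zip(u, u_f))
err_deconv   = max(abs(a - b) for a, b in zip(u, u_rec))
# The deconvolved field recovers the original far better than the
# filtered field alone: err_deconv << err_filtered
```

Because the test filter here is invertible on the resolved modes, a few iterations already recover the unfiltered field almost exactly, which mirrors why invertible filters are attractive for DDM-type models.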

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 61
8089 Simplified Modeling of Post-Soil Interaction for Roadside Safety Barriers

Authors: Charly Julien Nyobe, Eric Jacquelin, Denis Brizard, Alexy Mercier

Abstract:

The performance of roadside safety barriers depends largely on the dynamic interaction between post and soil. This interaction plays a key role in the response of barriers to crash testing. In the literature, soil-post interaction is modeled in crash test simulations using three approaches. Many researchers have initially used the finite element approach, in which the post is embedded in a continuum soil modeled with solid finite elements. This method represents a more comprehensive and detailed approach, employing a mesh-based continuum to model the soil's behavior and its interaction with the post. Although this method takes all soil properties into account, it is very costly in terms of simulation time. In the second approach, all the points of the post located at a predefined depth are fixed. Although this approach reduces CPU computing time, it overestimates the soil-post stiffness. In the third approach, the post is modeled as a beam supported by a set of nonlinear springs in the horizontal directions; for support in the vertical direction, the post is constrained at a node at ground level. This approach is less costly, but the literature does not provide a simple procedure to determine the constitutive law of the springs. The aim of this study is to propose a simple and low-cost procedure to obtain the constitutive law of the nonlinear springs that model the soil-post interaction. To achieve this objective, we first present a procedure to obtain the constitutive law of the nonlinear springs through the simulation of a soil compression test. The test consists in compressing the soil contained in a tank with a rigid solid, up to a vertical displacement of 200 mm. The resultant force exerted by the soil on the rigid solid and its vertical displacement are extracted, and a force-displacement curve is determined. The proposed procedure for replacing the soil with springs must then be tested against a reference model.
The reference model consists of a wooden post embedded in the ground and struck by an impactor. Two simplified spring models are studied. In the first, called the Kh-Kv model, springs are attached to the post in both the horizontal and vertical directions. The second, the Kh model, is the one described in the literature. The two simplified models are compared with the reference model according to several criteria: the vertical and horizontal displacements of a node located at the top of the post, the displacement of the post's center of rotation, and the impactor velocity. Both simplified models give results very close to those of the reference model. The Kh-Kv model is slightly better than the Kh model; furthermore, the former is more interesting than the latter, as it involves fewer arbitrary conditions. The simplified models also reduce the simulation time by a factor of 4. The Kh-Kv model can therefore be used as a reliable tool to represent the soil-post interaction in future research and development of road safety barriers.
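The step of turning the compression-test curve into a nonlinear spring law can be sketched by fitting a simple power-law force-displacement relation; both the data points and the power-law form are illustrative assumptions, not the paper's results:

```python
# Hedged sketch: deriving a nonlinear spring constitutive law from a
# soil-compression force-displacement curve by fitting a power law
# F = a * d**b with a log-log least-squares fit. The data points and
# the power-law form are illustrative, not the paper's output.
from math import log, exp

def fit_power_law(displacements, forces):
    """Least-squares fit of log F = log a + b log d."""
    xs = [log(d) for d in displacements]
    ys = [log(f) for f in forces]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = exp(ybar - b * xbar)
    return a, b

# Synthetic compression-test data: displacement (mm), force (kN),
# generated here from a known law so the fit can be checked
d = [10, 50, 100, 150, 200]
f = [2.0 * di ** 1.3 for di in d]

a, b = fit_power_law(d, f)
# Recovered parameters match the generating law: a ~ 2.0, b ~ 1.3
```

The fitted law can then be assigned to the spring elements of the simplified beam model in place of the continuum soil.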

Keywords: crash tests, nonlinear springs, soil-post interaction modeling, constitutive law

Procedia PDF Downloads 8
8088 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but at the same time, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forests and neural networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using the accuracy, sensitivity, and specificity of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%.
Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
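The evaluation indicators named above (accuracy, sensitivity, specificity) can be sketched directly from an intrusion detector's confusion matrix; the label vectors below are synthetic illustrations, not the study's data:

```python
# Hedged sketch of the evaluation indicators named in the abstract:
# accuracy, sensitivity, and specificity computed from an intrusion
# detector's confusion matrix. The label vectors are synthetic.
def confusion(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: attacks caught
    specificity = tn / (tn + fp)   # true-negative rate: benign passed
    return accuracy, sensitivity, specificity

# 1 = attack, 0 = benign traffic (synthetic labels and predictions)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(metrics(y_true, y_pred))  # accuracy 0.8, sensitivity 0.75, specificity ~0.83
```

Reporting all three together matters for intrusion detection, since a detector can achieve high accuracy while still missing attacks (low sensitivity) or flooding analysts with false alarms (low specificity).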

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 14
8087 Homogenization of a Non-Linear Problem with a Thermal Barrier

Authors: Hassan Samadi, Mustapha El Jarroudi

Abstract:

In this work, we consider the homogenization of a non-linear problem in a periodic medium consisting of two periodic connected media exchanging a heat flux through their common interface. The interfacial exchange coefficient λ is assumed to tend to zero or to infinity at a rate λ=λ(ε) as the size ε of the basic cell tends to zero. Three homogenized problems are determined according to critical values depending on λ and ε. Our method is based on Γ-convergence techniques.
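A schematic form of such a transmission problem, written as an illustrative sketch with generic notation rather than the authors' exact formulation, is:

```latex
% Illustrative sketch of an interfacial heat-exchange (transmission)
% problem; notation is generic, not the authors' exact formulation.
\begin{aligned}
-\operatorname{div}\big(a_i(x,\nabla u_i^\varepsilon)\big) &= f
  && \text{in each medium } \Omega_i^\varepsilon,\ i=1,2,\\
a_1(x,\nabla u_1^\varepsilon)\cdot n
  = a_2(x,\nabla u_2^\varepsilon)\cdot n
  &= \lambda(\varepsilon)\,\big(u_2^\varepsilon - u_1^\varepsilon\big)
  && \text{on the common interface } \Gamma^\varepsilon,
\end{aligned}
```

so that the flux across the interface is continuous and proportional, through λ(ε), to the temperature jump; the three homogenized regimes correspond to how fast λ(ε) vanishes or blows up relative to ε.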

Keywords: variational methods, epiconvergence, homogenization, convergence technique

Procedia PDF Downloads 509
8086 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements

Authors: M. A. García, J. Vinolas, A. Hernando

Abstract:

Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in order to pursue a sustainable economic model and to optimize the use of extensive resources, new methods to monitor and prevent failure of steel-based facilities are required. The classical mechanical tests, as for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are directly inapplicable. Hence, non-invasive monitoring techniques to prevent failure, without altering the structural properties of the elements, are required. Among them, electromagnetic methods are particularly suitable for the non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects induce a modification of the electromagnetic properties of an element upon applied stress. Since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for the contactless evaluation of internal stress in steel-based elements. In particular, this method relies on measuring the magnetic induction between two coils with the steel specimen placed between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen produced changes in the induction that allowed us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method to prevent failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways.

Keywords: magnetoelastic, magnetic induction, mechanical stress, steel

Procedia PDF Downloads 26
8085 Geometric Imperfections in Lattice Structures: A Simulation Strategy to Predict Strength Variability

Authors: Xavier Lorang, Ahmadali Tahmasebimoradi, Chetra Mang, Sylvain Girard

Abstract:

Additive manufacturing processes (e.g., selective laser melting) allow us to produce lattice structures which have lower weight, higher impact absorption capacity, and better thermal exchange properties compared to classical structures. Unfortunately, geometric imperfections (defects) in the lattice structures are by-products of the manufacturing process. These imperfections decrease the lifetime and the strength of the lattice structures and alter their mechanical responses. The objective of this paper is to present a simulation strategy which allows us to take into account the effect of the geometric imperfections on the mechanical response of the lattice structure. In the first part, an identification method for the geometric imperfection parameters of the lattice structure, based on point clouds, is presented. These point clouds are obtained from tomography measurements. The point clouds are fed into the LATANA (LATtice ANAlysis) platform, developed by IRT-SystemX, to characterize the geometric imperfections. This is done by projecting the point cloud of each microbeam along the beam axis onto a 2D surface. Then, by fitting an ellipse to the 2D projections of the points, the geometric imperfections are characterized by three parameters of an ellipse: the semi-major and semi-minor axes and the angle of rotation. From the calculated parameters of the microbeam geometric imperfections, a statistical analysis is carried out to determine a probability density law based on a statistical hypothesis. Microbeam samples are randomly drawn from the density law and are used to generate lattice structures. In the second part, a finite element model for the lattice structure with the simplified geometric imperfections (ellipse parameters) is presented. This numerical model is used to simulate the generated lattice structures.
The propagation of the uncertainties of the geometric imperfections is shown through the distribution of the computed mechanical responses of the lattice structures.
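The ellipse characterization step can be sketched with a moment-based estimate of the semi-axes and rotation angle from the projected 2D points (a simple stand-in for the fitting performed in LATANA; the points are synthetic):

```python
# Hedged sketch of characterizing a microbeam cross-section by an
# ellipse: estimate the semi-axes and rotation angle from the second
# moments (covariance) of the projected 2D point cloud. This is a
# moment-based stand-in for the paper's ellipse fit; points synthetic.
from math import cos, sin, atan2, sqrt, pi

def ellipse_from_points(pts):
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - cx) ** 2 for p in pts) / n
    syy = sum((p[1] - cy) ** 2 for p in pts) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in pts) / n
    # Eigenvalues of the 2x2 covariance matrix give the principal axes
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    theta = 0.5 * atan2(2 * sxy, sxx - syy)   # rotation of major axis
    # For boundary points uniform in the parameter t, variance = axis^2/2
    return sqrt(2 * l1), sqrt(2 * l2), theta

# Synthetic boundary points of an ellipse: a = 2, b = 1, rotated 30 deg
a, b, phi = 2.0, 1.0, pi / 6
pts = [(a * cos(t) * cos(phi) - b * sin(t) * sin(phi),
        a * cos(t) * sin(phi) + b * sin(t) * cos(phi))
       for t in [2 * pi * k / 200 for k in range(200)]]

print(ellipse_from_points(pts))  # approximately (2.0, 1.0, 0.524)
```

The three recovered parameters per microbeam are exactly the quantities on which the statistical analysis and random sampling in the abstract would operate.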

Keywords: additive manufacturing, finite element model, geometric imperfections, lattice structures, propagation of uncertainty

Procedia PDF Downloads 174